oxen-core/src/cryptonote_core/service_node_voting.cpp

// Copyright (c) 2018, The Loki Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
// conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
// of conditions and the following disclaimer in the documentation and/or other
// materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
// used to endorse or promote products derived from this software without specific
// prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include "service_node_voting.h"
#include <string>
#include <vector>
#include "checkpoints/checkpoints.h"
#include "common/hex.h"
#include "common/util.h"
#include "cryptonote_basic/connection_context.h"
#include "cryptonote_basic/cryptonote_format_utils.h"
#include "cryptonote_basic/tx_extra.h"
#include "cryptonote_basic/verification_context.h"
#include "cryptonote_protocol/cryptonote_protocol_defs.h"
#include "epee/misc_log_ex.h"
#include "epee/string_tools.h"
#include "service_node_list.h"
using cryptonote::hf;
namespace service_nodes {
static auto logcat = log::Cat("service_nodes");
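
// Builds the hash that obligations (state change) votes and signatures commit to: the
// little-endian packing of (block_height, service_node_index, state).  For deregistrations the
// state field is excluded from the hashed data to remain compatible with pre-v12 dereg votes.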
static crypto::hash make_state_change_vote_hash(
uint64_t block_height, uint32_t service_node_index, new_state state) {
uint16_t state_int = static_cast<uint16_t>(state);
auto buf = tools::memcpy_le(block_height, service_node_index, state_int);
auto size = buf.size();
if (state == new_state::deregister)
size -= sizeof(state_int); // Don't include state value for deregs (to be backwards
// compatible with pre-v12 dereg votes)
crypto::hash result;
crypto::cn_fast_hash(buf.data(), size, result);
return result;
}
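
// Signs a quorum vote with the given service node keys.  Obligations votes sign the state change
// vote hash; checkpointing votes sign the checkpointed block hash directly.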
crypto::signature make_signature_from_vote(
quorum_vote_t const& vote, const service_node_keys& keys) {
crypto::signature result = {};
switch (vote.type) {
default: {
log::info(logcat, "Unhandled vote type with value: {}", (int)vote.type);
assert("Unhandled vote type" == 0);
return result;
};
case quorum_type::obligations: {
crypto::hash hash = make_state_change_vote_hash(
vote.block_height, vote.state_change.worker_index, vote.state_change.state);
crypto::generate_signature(hash, keys.pub, keys.key, result);
} break;
case quorum_type::checkpointing: {
crypto::hash hash = vote.checkpoint.block_hash;
crypto::generate_signature(hash, keys.pub, keys.key, result);
} break;
}
return result;
}
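
// Produces the signature a validator contributes to a state change tx extra, using the same
// state change vote hash construction as an individual obligations vote.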
crypto::signature make_signature_from_tx_state_change(
cryptonote::tx_extra_service_node_state_change const& state_change,
const service_node_keys& keys) {
crypto::signature result;
crypto::hash hash = make_state_change_vote_hash(
state_change.block_height, state_change.service_node_index, state_change.state);
crypto::generate_signature(hash, keys.pub, keys.key, result);
return result;
}
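
// Bounds checks for quorum worker/validator indices.  On failure the (optional) vote
// verification context records which kind of index was out of range.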
static bool bounds_check_worker_index(
service_nodes::quorum const& quorum,
uint32_t worker_index,
cryptonote::vote_verification_context* vvc) {
if (worker_index >= quorum.workers.size()) {
if (vvc)
vvc->m_worker_index_out_of_bounds = true;
log::info(
logcat,
"Quorum worker index in was out of bounds: {}, expected to be in range of: [0, {}",
worker_index,
quorum.workers.size());
return false;
}
return true;
}
static bool bounds_check_validator_index(
service_nodes::quorum const& quorum,
uint32_t validator_index,
cryptonote::vote_verification_context* vvc) {
if (validator_index >= quorum.validators.size()) {
if (vvc)
vvc->m_validator_index_out_of_bounds = true;
log::info(
logcat,
"Validator's index was out of bounds: {}, expected to be in range of: [0, {}",
validator_index,
quorum.validators.size());
return false;
}
return true;
}
static bool bad_tx(cryptonote::tx_verification_context& tvc) {
tvc.m_verifivation_failed = true;
return false;
}
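
// Validates a state change tx extra against the obligations quorum for its height: the state
// value and hard fork, the number of votes, the worker index, the tx's age relative to the
// current height, and every validator signature (rejecting duplicate voters and, from HF13,
// votes that are not in ascending validator-index order).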
bool verify_tx_state_change(
const cryptonote::tx_extra_service_node_state_change& state_change,
uint64_t latest_height,
cryptonote::tx_verification_context& tvc,
const service_nodes::quorum& quorum,
const hf hf_version) {
auto& vvc = tvc.m_vote_ctx;
if (state_change.state != new_state::deregister && hf_version < hf::hf12_checkpointing) {
log::info(logcat, "Non-deregister state changes are invalid before v12");
return bad_tx(tvc);
}
if (state_change.state >= new_state::_count) {
log::info(
logcat,
"Unknown state change to new state: {}",
static_cast<uint16_t>(state_change.state));
return bad_tx(tvc);
}
if (state_change.votes.size() < service_nodes::STATE_CHANGE_MIN_VOTES_TO_CHANGE_STATE) {
log::info(logcat, "Not enough votes");
vvc.m_not_enough_votes = true;
return bad_tx(tvc);
}
if (state_change.votes.size() > service_nodes::STATE_CHANGE_QUORUM_SIZE) {
log::info(logcat, "Too many votes");
return bad_tx(tvc);
}
if (!bounds_check_worker_index(quorum, state_change.service_node_index, &vvc))
return bad_tx(tvc);
// Check if state_change is too old or too new to hold onto
{
if (state_change.block_height >= latest_height) {
log::info(
logcat,
"Received state change tx for height: {} and service node: {}, is newer than "
"current height: {} blocks and has been rejected.",
state_change.block_height,
state_change.service_node_index,
latest_height);
vvc.m_invalid_block_height = true;
if (state_change.block_height >= latest_height + VOTE_OR_TX_VERIFY_HEIGHT_BUFFER)
tvc.m_verifivation_failed = true;
return false;
}
if (latest_height >=
state_change.block_height + service_nodes::STATE_CHANGE_TX_LIFETIME_IN_BLOCKS) {
log::info(
logcat,
"Received state change tx for height: {} and service node: {}, is older than: "
"{} (current height: {}) blocks and has been rejected.",
state_change.block_height,
state_change.service_node_index,
service_nodes::STATE_CHANGE_TX_LIFETIME_IN_BLOCKS,
latest_height);
vvc.m_invalid_block_height = true;
if (latest_height >=
state_change.block_height + (service_nodes::STATE_CHANGE_TX_LIFETIME_IN_BLOCKS +
VOTE_OR_TX_VERIFY_HEIGHT_BUFFER))
tvc.m_verifivation_failed = true;
return false;
}
}
crypto::hash const hash = make_state_change_vote_hash(
state_change.block_height, state_change.service_node_index, state_change.state);
std::array<int, service_nodes::STATE_CHANGE_QUORUM_SIZE> validator_set = {};
int validator_index_tracker = -1;
for (const auto& vote : state_change.votes) {
if (hf_version >= hf::hf13_enforce_checkpoints) // NOTE: After HF13, votes must be stored
// in ascending order
{
if (validator_index_tracker >= static_cast<int>(vote.validator_index)) {
vvc.m_votes_not_sorted = true;
log::info(
logcat,
"Vote validator index is not stored in ascending order, prev validator "
"index: {}, curr index: {}",
validator_index_tracker,
vote.validator_index);
return bad_tx(tvc);
}
validator_index_tracker = vote.validator_index;
}
if (!bounds_check_validator_index(quorum, vote.validator_index, &vvc))
return bad_tx(tvc);
if (++validator_set[vote.validator_index] > 1) {
vvc.m_duplicate_voters = true;
log::info(logcat, "Voter quorum index is duplicated: {}", vote.validator_index);
return bad_tx(tvc);
}
crypto::public_key const& key = quorum.validators[vote.validator_index];
if (!crypto::check_signature(hash, key, vote.signature)) {
log::info(logcat, "Invalid signature for voter {}/{}", vote.validator_index, key);
vvc.m_signature_not_valid = true;
return bad_tx(tvc);
}
}
return true;
}
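
// Verifies an aggregated set of quorum signatures (checkpointing or pulse) over `hash`:
// enforces per-quorum signature count limits, ascending voter order where required, voter index
// bounds and uniqueness, pulse participation bits, and each individual signature.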
bool verify_quorum_signatures(
service_nodes::quorum const& quorum,
service_nodes::quorum_type type,
hf hf_version,
uint64_t height,
crypto::hash const& hash,
std::vector<quorum_signature> const& signatures,
const cryptonote::block* block) {
bool enforce_vote_ordering = true;
constexpr size_t MAX_QUORUM_SIZE =
std::max(CHECKPOINT_QUORUM_SIZE, PULSE_QUORUM_NUM_VALIDATORS);
std::array<size_t, MAX_QUORUM_SIZE> unique_vote_set = {};
switch (type) {
default:
assert(!"Invalid Code Path");
break;
// TODO(oxen): DRY quorum verification with state change obligations.
case quorum_type::checkpointing: {
if (signatures.size() < service_nodes::CHECKPOINT_MIN_VOTES) {
log::info(
logcat,
"Checkpoint has insufficient signatures to be considered at height: {}",
height);
return false;
}
if (signatures.size() > service_nodes::CHECKPOINT_QUORUM_SIZE) {
log::info(
logcat,
"Checkpoint has too many signatures to be considered at height: {}",
height);
return false;
}
enforce_vote_ordering = hf_version >= hf::hf13_enforce_checkpoints;
} break;
case quorum_type::pulse: {
if (signatures.size() != PULSE_BLOCK_REQUIRED_SIGNATURES) {
log::info(
logcat,
"Pulse block has {} signatures but requires {}",
signatures.size(),
PULSE_BLOCK_REQUIRED_SIGNATURES);
return false;
}
if (!block) {
log::info(
logcat, "Internal Error: Wrong type passed in any object, expected block.");
return false;
}
if (block->pulse.validator_bitset >= (1 << PULSE_QUORUM_NUM_VALIDATORS)) {
auto mask = std::bitset<sizeof(pulse_validator_bit_mask()) * 8>(
pulse_validator_bit_mask());
auto other = std::bitset<sizeof(pulse_validator_bit_mask()) * 8>(
block->pulse.validator_bitset);
log::info(
logcat,
"Pulse block specifies validator participation bits out of bounds. "
"Expected the bit mask: {}, block: {}",
mask.to_string(),
other.to_string());
return false;
}
} break;
}
for (size_t i = 0; i < signatures.size(); i++) {
service_nodes::quorum_signature const& quorum_signature = signatures[i];
if (enforce_vote_ordering && i < (signatures.size() - 1)) {
auto curr = signatures[i].voter_index;
auto next = signatures[i + 1].voter_index;
if (curr >= next) {
log::info(
logcat,
"Voters in signatures are not given in ascending order, failed "
"verification at height: {}",
height);
return false;
}
}
if (!bounds_check_validator_index(quorum, quorum_signature.voter_index, nullptr))
return false;
if (type == quorum_type::pulse) {
if (!block) {
log::info(
logcat, "Internal Error: Wrong type passed in any object, expected block.");
return false;
}
uint16_t bit = 1 << quorum_signature.voter_index;
if ((block->pulse.validator_bitset & bit) == 0) {
log::info(
logcat,
"Received pulse signature from validator {} that is not participating in "
"round {}",
static_cast<int>(quorum_signature.voter_index),
static_cast<int>(block->pulse.round));
return false;
}
}
crypto::public_key const& key = quorum.validators[quorum_signature.voter_index];
if (quorum_signature.voter_index >= unique_vote_set.size()) {
log::info(
logcat,
"Internal Error: Voter Index indexes out of bounds of the vote set, index: "
"{}vote set size: {}",
quorum_signature.voter_index,
unique_vote_set.size());
return false;
}
if (unique_vote_set[quorum_signature.voter_index]++) {
log::info(
logcat,
"Voter: {}, quorum index is duplicated: {}, failed verification at height: {}",
tools::type_to_hex(key),
quorum_signature.voter_index,
height);
return false;
}
if (!crypto::check_signature(hash, key, quorum_signature.signature)) {
log::info(
logcat,
"Incorrect signature for vote, failed verification at height: {} for voter: "
"{}\n{}",
height,
key,
quorum);
return false;
}
}
return true;
}
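
// A pulse quorum must consist of exactly one worker (the block producer) and
// PULSE_QUORUM_NUM_VALIDATORS validators.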
bool verify_pulse_quorum_sizes(service_nodes::quorum const& quorum) {
bool result =
quorum.workers.size() == 1 && quorum.validators.size() == PULSE_QUORUM_NUM_VALIDATORS;
return result;
}
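
// Service node checkpoints must fall on a CHECKPOINT_INTERVAL boundary and carry valid quorum
// signatures; any other checkpoint type must carry no signatures at all.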
bool verify_checkpoint(
hf hf_version,
cryptonote::checkpoint_t const& checkpoint,
service_nodes::quorum const& quorum) {
if (checkpoint.type == cryptonote::checkpoint_type::service_node) {
if ((checkpoint.height % service_nodes::CHECKPOINT_INTERVAL) != 0) {
log::info(
logcat,
"Checkpoint given but not expecting a checkpoint at height: {}",
checkpoint.height);
return false;
}
if (!verify_quorum_signatures(
quorum,
quorum_type::checkpointing,
hf_version,
checkpoint.height,
checkpoint.block_hash,
checkpoint.signatures)) {
log::info(
logcat,
"Checkpoint failed signature validation at block {} {}",
checkpoint.height,
checkpoint.block_hash);
return false;
}
} else {
if (checkpoint.signatures.size() != 0) {
log::info(
logcat,
"Non service-node checkpoints should have no signatures, checkpoint failed at "
"height: {}",
checkpoint.height);
return false;
}
}
return true;
}
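
// Constructs and signs an obligations vote from validator `validator_index` proposing `state`
// (with `reason`) for the worker at `worker_index`.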
quorum_vote_t make_state_change_vote(
uint64_t block_height,
uint16_t validator_index,
uint16_t worker_index,
new_state state,
uint16_t reason,
const service_node_keys& keys) {
quorum_vote_t result = {};
result.type = quorum_type::obligations;
result.block_height = block_height;
result.group = quorum_group::validator;
result.index_in_group = validator_index;
result.state_change.worker_index = worker_index;
result.state_change.state = state;
result.state_change.reason = reason;
result.signature = make_signature_from_vote(result, keys);
return result;
}
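
// Constructs and signs a checkpointing vote for `block_hash` at `block_height`.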
quorum_vote_t make_checkpointing_vote(
hf hf_version,
crypto::hash const& block_hash,
uint64_t block_height,
uint16_t index_in_quorum,
const service_node_keys& keys) {
quorum_vote_t result = {};
result.type = quorum_type::checkpointing;
result.checkpoint.block_hash = block_hash;
result.block_height = block_height;
result.group = quorum_group::validator;
result.index_in_group = index_in_quorum;
result.signature = make_signature_from_vote(result, keys);
return result;
}
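
// Creates a service node checkpoint for the given block with an (initially) empty signature list.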
cryptonote::checkpoint_t make_empty_service_node_checkpoint(
crypto::hash const& block_hash, uint64_t height) {
cryptonote::checkpoint_t result = {};
result.type = cryptonote::checkpoint_type::service_node;
result.height = height;
result.block_hash = block_hash;
return result;
}
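
// Rejects votes older than VOTE_LIFETIME blocks or ahead of the current height.  Heights that
// miss the window by no more than VOTE_OR_TX_VERIFY_HEIGHT_BUFFER are still rejected but are not
// flagged as a hard verification failure.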
bool verify_vote_age(
const quorum_vote_t& vote,
uint64_t latest_height,
cryptonote::vote_verification_context& vvc) {
bool result = true;
bool height_in_buffer = false;
if (latest_height > vote.block_height + VOTE_LIFETIME) {
height_in_buffer = latest_height <=
vote.block_height + (VOTE_LIFETIME + VOTE_OR_TX_VERIFY_HEIGHT_BUFFER);
log::info(
logcat,
"Received vote for height: {}, is older than: {} blocks and has been rejected.",
vote.block_height,
VOTE_LIFETIME);
vvc.m_invalid_block_height = true;
} else if (vote.block_height > latest_height) {
height_in_buffer = vote.block_height <= latest_height + VOTE_OR_TX_VERIFY_HEIGHT_BUFFER;
log::info(
logcat,
"Received vote for height: {}, is newer than: {} (latest block height) and has "
"been rejected.",
vote.block_height,
latest_height);
vvc.m_invalid_block_height = true;
}
if (vvc.m_invalid_block_height) {
vvc.m_verification_failed = !height_in_buffer;
result = false;
}
return result;
}
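
// Verifies a single vote's signature against the quorum: checks the vote type and voting group,
// bounds-checks the voter's index, reconstructs the hash the vote should have signed, and then
// checks the signature against the corresponding validator public key.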
bool verify_vote_signature(
hf hf_version,
const quorum_vote_t& vote,
cryptonote::vote_verification_context& vvc,
const service_nodes::quorum& quorum) {
bool result = true;
if (vote.type > tools::enum_top<quorum_type>) {
vvc.m_invalid_vote_type = true;
result = false;
}
if (vote.group > quorum_group::worker || vote.group < quorum_group::validator) {
vvc.m_incorrect_voting_group = true;
result = false;
}
if (!result)
return result;
if (vote.group == quorum_group::validator)
result = bounds_check_validator_index(quorum, vote.index_in_group, &vvc);
else
result = bounds_check_worker_index(quorum, vote.index_in_group, &vvc);
if (!result)
return result;
crypto::public_key key{};
crypto::hash hash{};
switch (vote.type) {
default: {
log::info(logcat, "Unhandled vote type with value: {}", (int)vote.type);
assert("Unhandled vote type" == 0);
return false;
};
case quorum_type::obligations: {
if (vote.group != quorum_group::validator) {
log::info(
logcat,
"Vote received specifies incorrect voting group, expected vote from "
"validator");
vvc.m_incorrect_voting_group = true;
result = false;
} else {
key = quorum.validators[vote.index_in_group];
hash = make_state_change_vote_hash(
vote.block_height, vote.state_change.worker_index, vote.state_change.state);
result = bounds_check_worker_index(quorum, vote.state_change.worker_index, &vvc);
}
} break;
case quorum_type::checkpointing: {
if (vote.group != quorum_group::validator) {
log::info(logcat, "Vote received specifies incorrect voting group");
vvc.m_incorrect_voting_group = true;
result = false;
} else {
key = quorum.validators[vote.index_in_group];
hash = vote.checkpoint.block_hash;
}
} break;
}
if (!result)
return result;
result = crypto::check_signature(hash, key, vote.signature);
if (result)
        log::debug(
                logcat,
                "Signature accepted for {} voter {}/{}{} at height {}",
                vote.type,
                vote.index_in_group,
                key,
                (vote.type == quorum_type::obligations
                         ? " voting for worker " + std::to_string(vote.state_change.worker_index)
                         : ""),
                vote.block_height);
else
vvc.m_signature_not_valid = true;
return result;
}
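
// Looks for an existing pool entry matching this vote's key fields and, if found, returns a
// pointer to its accumulated pool_vote_entry list.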
template <typename T>
static std::vector<pool_vote_entry>* find_vote_in_pool(
std::vector<T>& pool, const quorum_vote_t& vote, bool create) {
T typed_vote{vote};
auto it = std::find(pool.begin(), pool.end(), typed_vote);
if (it != pool.end())
return &it->votes;
if (!create)
return nullptr;
pool.push_back(std::move(typed_vote));
return &pool.back().votes;
}
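
// Dispatches on the vote's quorum type to the matching pool (obligations or
// checkpointing).  Returns nullptr, after logging, for an unhandled type, so
// callers must check the result before using it.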
std::vector<pool_vote_entry>* voting_pool::find_vote_pool(
const quorum_vote_t& find_vote, bool create_if_not_found) {
switch (find_vote.type) {
default:
log::info(logcat, "Unhandled find_vote type with value: {}", (int)find_vote.type);
assert("Unhandled find_vote type" == 0);
return nullptr;
case quorum_type::obligations:
return find_vote_in_pool(m_obligations_pool, find_vote, create_if_not_found);
case quorum_type::checkpointing:
return find_vote_in_pool(m_checkpoint_pool, find_vote, create_if_not_found);
}
}
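
// Marks the given votes as just relayed over p2p by stamping
// time_last_sent_p2p with the current time; get_relayable_votes() below then
// skips them until TIME_BETWEEN_RELAY has elapsed again.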
void voting_pool::set_relayed(const std::vector<quorum_vote_t>& votes) {
std::unique_lock lock{m_lock};
const time_t now = time(nullptr);
for (const quorum_vote_t& find_vote : votes) {
std::vector<pool_vote_entry>* vote_pool = find_vote_pool(find_vote);
if (vote_pool) // We found the group that this vote belongs to
{
auto vote = std::find_if(
vote_pool->begin(),
vote_pool->end(),
[&find_vote](pool_vote_entry const& entry) {
return (find_vote.index_in_group == entry.vote.index_in_group);
});
if (vote != vote_pool->end()) {
vote->time_last_sent_p2p = now;
}
}
}
}
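
// Helper for get_relayable_votes(): copies out every pooled vote that is
// still within the vote lifetime window (block_height >= min_height) and has
// not been sent recently (time_last_sent_p2p <= max_last_sent).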
template <typename T>
static void append_relayable_votes(
std::vector<quorum_vote_t>& result,
const T& pool,
const uint64_t max_last_sent,
uint64_t min_height) {
for (const auto& pool_entry : pool)
for (const auto& vote_entry : pool_entry.votes)
if (vote_entry.vote.block_height >= min_height &&
vote_entry.time_last_sent_p2p <= max_last_sent)
result.push_back(vote_entry.vote);
}
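
// Collects votes that should be (re)broadcast at the given height.  Before
// HF14 everything goes over the ordinary relay (and quorum relaying is
// disabled); from HF14 onwards obligations votes go only over the quorum
// relay while checkpoint votes continue over the ordinary relay.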
std::vector<quorum_vote_t> voting_pool::get_relayable_votes(
uint64_t height, hf hf_version, bool quorum_relay) const {
std::unique_lock lock{m_lock};
// TODO(doyle): Rate-limiting: A better threshold value that follows suit with transaction
// relay time back-off
constexpr uint64_t TIME_BETWEEN_RELAY = 60 * 2;
const uint64_t max_last_sent = static_cast<uint64_t>(time(nullptr)) - TIME_BETWEEN_RELAY;
const uint64_t min_height = height > VOTE_LIFETIME ? height - VOTE_LIFETIME : 0;
std::vector<quorum_vote_t> result;
if (quorum_relay && hf_version < hf::hf14_blink)
return result; // no quorum relaying before HF14
if (hf_version < hf::hf14_blink || quorum_relay)
append_relayable_votes(result, m_obligations_pool, max_last_sent, min_height);
if (hf_version < hf::hf14_blink || !quorum_relay)
append_relayable_votes(result, m_checkpoint_pool, max_last_sent, min_height);
return result;
}
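
// The vote list of a pool entry is kept sorted by the voter's index_in_group:
// std::lower_bound locates the insertion point and the vote is inserted only
// if no vote from that quorum index is already present.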
// return: true if the vote was unique and was added to the pool
static bool add_vote_to_pool_if_unique(
std::vector<pool_vote_entry>& votes, quorum_vote_t const& vote) {
auto vote_it = std::lower_bound(
votes.begin(),
votes.end(),
vote,
[](pool_vote_entry const& pool_entry, quorum_vote_t const& vote) {
assert(pool_entry.vote.group == vote.group);
return pool_entry.vote.index_in_group < vote.index_in_group;
});
if (vote_it == votes.end() || vote_it->vote.index_in_group != vote.index_in_group) {
votes.insert(vote_it, {vote});
return true;
}
return false;
}
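
// Adds an externally received vote to the appropriate pool (creating the pool
// entry if needed), records whether it was new in vvc.m_added_to_pool, and
// returns a copy of all votes accumulated so far for that entry.  Illustrative
// use, assuming a voting_pool instance `pool` and an already verified `vote`:
//
//     cryptonote::vote_verification_context vvc{};
//     auto votes = pool.add_pool_vote_if_unique(vote, vvc);
//     if (vvc.m_added_to_pool) { /* new vote; consider relaying it */ }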
std::vector<pool_vote_entry> voting_pool::add_pool_vote_if_unique(
const quorum_vote_t& vote, cryptonote::vote_verification_context& vvc) {
std::unique_lock lock{m_lock};
auto* votes = find_vote_pool(vote, /*create_if_not_found=*/true);
if (!votes)
return {};
vvc.m_added_to_pool = add_vote_to_pool_if_unique(*votes, vote);
return *votes;
}
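
// Once a state change transaction is mined the votes that produced it are no
// longer needed: for every state_change tx in the given list this drops the
// matching obligations pool entry.  Checkpoint votes are not culled here yet,
// per the TODO below.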
void voting_pool::remove_used_votes(std::vector<cryptonote::transaction> const& txs, hf version) {
// TODO(doyle): Cull checkpoint votes
std::unique_lock lock{m_lock};
if (m_obligations_pool.empty())
return;
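// Drop any pooled obligations votes that match a state change included in one of the
// given transactions: once the state change has been mined, those votes are no longer
// needed.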
for (const auto& tx : txs) {
if (tx.type != cryptonote::txtype::state_change)
continue;
cryptonote::tx_extra_service_node_state_change state_change;
if (!get_service_node_state_change_from_tx_extra(tx.extra, state_change, version)) {
log::error(logcat, "Could not get state change from tx, possibly corrupt tx");
continue;
}
auto it = std::find(m_obligations_pool.begin(), m_obligations_pool.end(), state_change);
if (it != m_obligations_pool.end())
m_obligations_pool.erase(it);
}
}
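// Erase every entry of `vote_pool` whose height falls outside [min_height, max_height].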
template <typename T>
static void cull_votes(std::vector<T>& vote_pool, uint64_t min_height, uint64_t max_height) {
for (auto it = vote_pool.begin(); it != vote_pool.end();) {
const T& pool_entry = *it;
if (pool_entry.height < min_height || pool_entry.height > max_height)
it = vote_pool.erase(it);
else
++it;
}
}
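// Prune pooled votes outside the allowed window: anything older than the vote lifetime
// ending at `height`, or claiming a height beyond `height`, is culled from both the
// obligations and checkpoint pools. The lower bound is clamped to zero to avoid
// unsigned underflow early in the chain.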
void voting_pool::remove_expired_votes(uint64_t height) {
std::unique_lock lock{m_lock};
uint64_t min_height = (height < VOTE_LIFETIME) ? 0 : height - VOTE_LIFETIME;
cull_votes(m_obligations_pool, min_height, height);
cull_votes(m_checkpoint_pool, min_height, height);
}
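// Returns true if the checkpoint pool already holds a vote for `height` cast by the
// validator at position `index_in_quorum`.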
bool voting_pool::received_checkpoint_vote(uint64_t height, size_t index_in_quorum) const {
auto pool_it = std::find_if(
m_checkpoint_pool.begin(),
m_checkpoint_pool.end(),
[height](checkpoint_pool_entry const& entry) { return entry.height == height; });
if (pool_it == m_checkpoint_pool.end())
return false;
for (const auto& pool_vote : pool_it->votes) {
if (pool_vote.vote.index_in_group == index_in_quorum)
return true;
}
return false;
}
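// epee key-value serialization for a quorum vote. Checkpointing votes carry the
// checkpointed block hash (serialized under the "checkpoint" key); all other vote types
// carry the state-change payload (worker index and requested state).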
KV_SERIALIZE_MAP_CODE_BEGIN(quorum_vote_t)
KV_SERIALIZE(version)
KV_SERIALIZE_ENUM(type)
KV_SERIALIZE(block_height)
KV_SERIALIZE_ENUM(group)
KV_SERIALIZE(index_in_group)
KV_SERIALIZE_VAL_POD_AS_BLOB(signature)
if (this_ref.type == quorum_type::checkpointing) {
KV_SERIALIZE_VAL_POD_AS_BLOB_N(checkpoint.block_hash, "checkpoint")
} else {
KV_SERIALIZE(state_change.worker_index)
KV_SERIALIZE_ENUM(state_change.state)
}
KV_SERIALIZE_MAP_CODE_END()
} // namespace service_nodes
namespace cryptonote {
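// Relay payload for NOTIFY_NEW_SERVICE_NODE_VOTE: just the serialized vector of votes.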
KV_SERIALIZE_MAP_CODE_BEGIN(NOTIFY_NEW_SERVICE_NODE_VOTE::request)
KV_SERIALIZE(votes)
KV_SERIALIZE_MAP_CODE_END()
} // namespace cryptonote