// Copyright (c) 2018, The Loki Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
// conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
// of conditions and the following disclaimer in the documentation and/or other
// materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
// used to endorse or promote products derived from this software without specific
// prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

#include <functional>
#include <random>
#include <algorithm>
#include <chrono>
#include <boost/endian/conversion.hpp>
# include "ringct/rctSigs.h"
# include "wallet/wallet2.h"
# include "cryptonote_tx_utils.h"
# include "cryptonote_basic/tx_extra.h"
2018-12-11 03:46:35 +01:00
# include "int-util.h"
2018-07-21 03:27:13 +02:00
# include "common/scoped_message_writer.h"
# include "common/i18n.h"
2019-07-15 10:08:52 +02:00
# include "common/util.h"
2019-05-31 03:06:42 +02:00
# include "blockchain.h"
2018-12-21 01:33:05 +01:00
# include "service_node_quorum_cop.h"
2018-06-29 06:47:00 +02:00
# include "service_node_list.h"
2018-11-15 03:22:14 +01:00
# include "service_node_rules.h"
2019-04-08 04:03:47 +02:00
# include "service_node_swarm.h"
2019-06-27 09:05:44 +02:00
# include "version.h"
2018-06-29 06:47:00 +02:00
# undef LOKI_DEFAULT_LOG_CATEGORY
# define LOKI_DEFAULT_LOG_CATEGORY "service_nodes"
namespace service_nodes
{
  size_t constexpr STORE_LONG_TERM_STATE_INTERVAL = 10000;
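
  // Returns the lowest block height whose short-term state history still needs to be kept. By default
  // this retains 6 * STATE_CHANGE_TX_LIFETIME_IN_BLOCKS blocks of history; from the checkpoint-enforcing
  // hard fork onwards the result is additionally clamped to just below the latest immutable checkpoint,
  // so state that a checkpoint may still reference is never culled early.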
  static uint64_t short_term_state_cull_height(uint8_t hf_version, cryptonote::BlockchainDB const *db, uint64_t block_height)
  {
    size_t constexpr DEFAULT_SHORT_TERM_STATE_HISTORY = 6 * STATE_CHANGE_TX_LIFETIME_IN_BLOCKS;
    uint64_t result =
        (block_height < DEFAULT_SHORT_TERM_STATE_HISTORY) ? 0 : block_height - DEFAULT_SHORT_TERM_STATE_HISTORY;

    if (hf_version >= cryptonote::network_version_13_enforce_checkpoints)
    {
      uint64_t latest_height = db->height() - 1;
      cryptonote::checkpoint_t checkpoint;
      if (db->get_immutable_checkpoint(&checkpoint, latest_height))
        result = std::min(result, checkpoint.height - 1);
    }

    return result;
  }

  static int get_min_service_node_info_version_for_hf(uint8_t hf_version)
  {
    return service_node_info::version_1_add_registration_hf_version;
  }

  service_node_list::service_node_list(cryptonote::Blockchain &blockchain)
    : m_blockchain(blockchain)
    , m_db(nullptr)
    , m_service_node_pubkey(nullptr)
    , m_store_quorum_history(0)
    , m_state_added_to_archive(false)
  {
  }
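
  // Replays the blockchain from the current in-memory state height up to the chain tip, pulling blocks
  // in batches of up to 1000 and feeding each block together with its transactions through
  // process_block(). Progress is logged every 10 batches and, when store_to_disk is set, the partially
  // rebuilt state is periodically flushed to the database along the way.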
  void service_node_list::rescan_starting_from_curr_state(bool store_to_disk)
  {
    if (m_blockchain.get_current_hard_fork_version() < cryptonote::network_version_9_service_nodes)
      return;

    auto scan_start         = std::chrono::high_resolution_clock::now();
    uint64_t chain_height   = m_blockchain.get_current_blockchain_height();
    uint64_t current_height = chain_height - 1;
    if (m_state.height == current_height)
      return;

    MGINFO("Recalculating service nodes list, scanning blockchain from height: " << m_state.height << " to: " << current_height);

    std::vector<std::pair<cryptonote::blobdata, cryptonote::block>> blocks;
    std::vector<cryptonote::transaction> txs;
    std::vector<crypto::hash> missed_txs;
    auto work_start = std::chrono::high_resolution_clock::now();
    for (uint64_t i = 0; m_state.height < current_height; i++)
    {
      if (i > 0 && i % 10 == 0)
      {
        if (store_to_disk) store();
        auto work_end = std::chrono::high_resolution_clock::now();
        auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(work_end - work_start);
        MGINFO("... scanning height " << m_state.height << " (" << duration.count() / 1000.f << "s)");
        work_start = std::chrono::high_resolution_clock::now();
      }

      blocks.clear();
      if (!m_blockchain.get_blocks(m_state.height + 1, 1000, blocks))
      {
        MERROR("Unable to initialize service nodes list");
        return;
      }

      if (blocks.empty())
        break;

      for (const auto &block_pair : blocks)
      {
        txs.clear();
        missed_txs.clear();

        const cryptonote::block &block = block_pair.second;
        if (!m_blockchain.get_transactions(block.tx_hashes, txs, missed_txs))
        {
          MERROR("Unable to get transactions for block " << block.hash);
          return;
        }

        process_block(block, txs);
      }
    }

    auto scan_end = std::chrono::high_resolution_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(scan_end - scan_start);
    MGINFO("Done recalculating service nodes list (" << duration.count() / 1000.f << "s)");
    if (store_to_disk) store();
  }

  // TODO(loki): Temporary HF13 code, remove when we hit HF13 because we delete all HF12 checkpoints and don't need conditionals for HF12/HF13 checkpointing code
  static uint64_t hf13_height;
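
  // Initializes the list at startup: loads the state persisted in the database if possible; if nothing
  // usable was loaded, or the stored state is ahead of the current chain height, the list is reset to
  // an empty state before the remaining blocks are rescanned via rescan_starting_from_curr_state().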
  void service_node_list::init()
  {
    // TODO(loki): Temporary HF13 code, remove when we hit HF13 because we delete all HF12 checkpoints and don't need conditionals for HF12/HF13 checkpointing code
    hf13_height = m_blockchain.get_earliest_ideal_height_for_version(cryptonote::network_version_13_enforce_checkpoints);

    std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);
    if (m_blockchain.get_current_hard_fork_version() < 9)
    {
      reset(true);
      return;
    }

    uint64_t current_height = m_blockchain.get_current_blockchain_height();
    bool loaded = load(current_height);
    if (loaded && m_old_quorum_states.size() < std::min(m_store_quorum_history, uint64_t{10})) {
      LOG_PRINT_L0("Full history storage requested, but " << m_old_quorum_states.size() << " old quorum states found");
      loaded = false; // Either we don't have stored history or the history is very short, so recalculation is necessary or cheap.
    }

    bool store_to_disk = false;
    if (!loaded || m_state.height > current_height)
    {
      reset(true);
      store_to_disk = true;
    }
    rescan_starting_from_curr_state(store_to_disk);
  }
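
  // Helper: copies every (pubkey, info) pair from sns_infos that satisfies the predicate p into a new
  // vector, then sorts it by raw byte comparison of the pairs (effectively ordering by public key) so
  // callers always see a deterministic ordering. Pass reserve = false when only a small fraction of the
  // entries is expected to match.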
  template <typename UnaryPredicate>
  static std::vector<service_nodes::pubkey_and_sninfo> sort_and_filter(const service_nodes_infos_t &sns_infos, UnaryPredicate p, bool reserve = true) {
    std::vector<pubkey_and_sninfo> result;
    if (reserve) result.reserve(sns_infos.size());
    for (const pubkey_and_sninfo &key_info : sns_infos)
      if (p(*key_info.second))
        result.push_back(key_info);

    std::sort(result.begin(), result.end(),
      [](const pubkey_and_sninfo &a, const pubkey_and_sninfo &b) {
        return memcmp(reinterpret_cast<const void *>(&a), reinterpret_cast<const void *>(&b), sizeof(a)) < 0;
      });
    return result;
  }
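
  // Returns every service node whose info reports is_active(), in the deterministic order produced by
  // sort_and_filter(); decommissioned_service_nodes_infos() below is the analogous query for fully
  // funded nodes that are currently decommissioned.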
  std::vector<pubkey_and_sninfo> service_node_list::state_t::active_service_nodes_infos() const {
    return sort_and_filter(service_nodes_infos, [](const service_node_info &info) { return info.is_active(); });
  }

  std::vector<pubkey_and_sninfo> service_node_list::state_t::decommissioned_service_nodes_infos() const {
    return sort_and_filter(service_nodes_infos, [](const service_node_info &info) { return info.is_decommissioned() && info.is_fully_funded(); }, /*reserve=*/ false);
  }
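
  // Fetches the quorum of the requested type for the given height (after applying the testing-quorum
  // height offset). The lookup tries the current state first, then the recent state history and the
  // long-term archive, and finally, when include_old is set, the pruned m_old_quorum_states cache.
  // If alt_quorums is non-null, matching quorums from known alternative-chain states at the same height
  // are appended to it as well.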
  std::shared_ptr<const testing_quorum> service_node_list::get_testing_quorum(quorum_type type, uint64_t height, bool include_old, std::vector<std::shared_ptr<const testing_quorum>> *alt_quorums) const
  {
    height = offset_testing_quorum_height(type, height);
    std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);
    quorum_manager const *quorums = nullptr;
    if (height == m_state.height)
      quorums = &m_state.quorums;
    else // NOTE: Search m_state_history && m_state_archive
    {
      auto it = m_state_history.find(height);
      if (it != m_state_history.end())
        quorums = &it->quorums;

      if (!quorums)
      {
        auto it = m_state_archive.find(height);
        if (it != m_state_archive.end()) quorums = &it->quorums;
      }
    }

    if (!quorums && include_old) // NOTE: Search m_old_quorum_states
    {
      auto it =
          std::lower_bound(m_old_quorum_states.begin(),
                           m_old_quorum_states.end(),
                           height,
                           [](quorums_by_height const &entry, uint64_t height) { return entry.height < height; });

      if (it != m_old_quorum_states.end() && it->height == height)
        quorums = &it->quorums;
    }

    if (!quorums)
      return nullptr;

    if (alt_quorums)
    {
      for (std::pair<crypto::hash, state_t> const &hash_to_state : m_alt_state)
      {
        state_t const &alt_state = hash_to_state.second;
        if (alt_state.height == height)
        {
          std::shared_ptr<const testing_quorum> alt_result = alt_state.quorums.get(type);
          if (alt_result) alt_quorums->push_back(alt_result);
        }
      }
    }

    std::shared_ptr<const testing_quorum> result = quorums->get(type);
    return result;
  }
static bool get_pubkey_from_quorum(testing_quorum const &quorum, quorum_group group, size_t quorum_index, crypto::public_key &key)
{
  std::vector<crypto::public_key> const *array = nullptr;
  if (group == quorum_group::validator) array = &quorum.validators;
  else if (group == quorum_group::worker) array = &quorum.workers;
  else
  {
    MERROR("Invalid quorum group specified");
    return false;
  }

  if (quorum_index >= array->size())
  {
    MERROR("Quorum indexing out of bounds: " << quorum_index << ", quorum_size: " << array->size());
    return false;
  }

  key = (*array)[quorum_index];
  return true;
}
bool service_node_list::get_quorum_pubkey(quorum_type type, quorum_group group, uint64_t height, size_t quorum_index, crypto::public_key &key) const
{
  std::shared_ptr<const testing_quorum> quorum = get_testing_quorum(type, height);
  if (!quorum)
  {
    LOG_PRINT_L1("Quorum for height: " << height << ", was not stored by the daemon");
    return false;
  }

  bool result = get_pubkey_from_quorum(*quorum, group, quorum_index, key);
  return result;
}
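
// A brief usage sketch for the lookup above (not part of the original source: the helper
// name and the height value are illustrative, while quorum_type::obligations and
// quorum_group::worker are identifiers used elsewhere in this file).
static bool example_log_quorum_worker(service_node_list const &list, uint64_t height, size_t index)
{
  crypto::public_key worker_key;
  if (!list.get_quorum_pubkey(quorum_type::obligations, quorum_group::worker, height, index, worker_key))
    return false; // quorum for that height not stored by the daemon, or index out of bounds
  MINFO("Obligations quorum worker[" << index << "] at height " << height << ": " << worker_key);
  return true;
}
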
std::vector<service_node_pubkey_info> service_node_list::get_service_node_list_state(const std::vector<crypto::public_key> &service_node_pubkeys) const
{
  std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);
  std::vector<service_node_pubkey_info> result;

  if (service_node_pubkeys.empty())
  {
    result.reserve(m_state.service_nodes_infos.size());
    for (const auto &info : m_state.service_nodes_infos)
      result.emplace_back(info);
  }
  else
  {
    result.reserve(service_node_pubkeys.size());
    for (const auto &it : service_node_pubkeys)
    {
      auto find_it = m_state.service_nodes_infos.find(it);
      if (find_it != m_state.service_nodes_infos.end())
        result.emplace_back(*find_it);
    }
  }

  return result;
}
void service_node_list::set_db_pointer(cryptonote::BlockchainDB *db)
{
  std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);
  m_db = db;
}

void service_node_list::set_my_service_node_keys(crypto::public_key const *pub_key)
{
  std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);
  m_service_node_pubkey = pub_key;
}
void service_node_list::set_quorum_history_storage(uint64_t hist_size) {
  if (hist_size == 1) // a value of 1 is special-cased to mean "keep all quorum history"
    hist_size = std::numeric_limits<uint64_t>::max();
  m_store_quorum_history = hist_size;
}
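
// Illustrative calls (the 720-blocks-per-day figure comes from the commit notes below; the
// interpretation of the argument as "how many recent heights' quorums to retain" is a reading
// of the code above, and the owning object name is a placeholder):
//   m_service_node_list.set_quorum_history_storage(7 * 720); // keep roughly a week of quorum history
//   m_service_node_list.set_quorum_history_storage(1);       // special-cased above: keep everything
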
Relax deregistration rules
This replaces the deregistration mechanism with a new state change
mechanism (beginning at the v12 fork) which can change a service node's
network status via three potential values (and is extensible in the
future to handle more):
- deregistered -- this is the same as the existing deregistration; the
SN is instantly removed from the SN list.
- decommissioned -- this is a sort of temporary deregistration: your SN
remains in the service node list, but is removed from the rewards list
and from any network duties.
- recommissioned -- this tx is sent by a quorum if they observe a
decommissioned SN sending uptime proofs again. Upon reception, the SN
is reactivated and put on the end of the reward list.
Since this is broadening the quorum use, this also renames the relevant
quorum to a "obligations" quorum (since it validates SN obligations),
while the transactions are "state_change" transactions (since they
change the state of a registered SN).
The new parameters added to service_node_rules.h control how this works:
// Service node decommissioning: as service nodes stay up they earn "credits" (measured in blocks)
// towards a future outage. A new service node starts out with INITIAL_CREDIT, and then builds up
// CREDIT_PER_DAY for each day the service node remains active up to a maximum of
// DECOMMISSION_MAX_CREDIT.
//
// If a service node stops sending uptime proofs, a quorum will consider whether the service node
// has built up enough credits (at least MINIMUM): if so, instead of submitting a deregistration,
// it instead submits a decommission. This removes the service node from the list of active
// service nodes both for rewards and for any active network duties. If the service node comes
// back online (i.e. starts sending the required performance proofs again) before the credits run
// out then a quorum will reinstate the service node using a recommission transaction, which adds
// the service node back to the bottom of the service node reward list, and resets its accumulated
// credits to 0. If it does not come back online within the required number of blocks (i.e. the
// accumulated credit at the point of decommissioning) then a quorum will send a permanent
// deregistration transaction to the network, starting a 30-day deregistration countdown.
This commit currently includes values (which are not necessarily
finalized):
- 8 hours (240 blocks) of credit required for activation of a
decommission (rather than a deregister)
- 0 initial credits at registration
- a maximum of 24 hours (720 blocks) of credits
- credits accumulate at a rate such that you hit 24 hours of credits after 30
days of operation.
Miscellaneous other details of this PR:
- a new TX extra tag is used for the state change (including
deregistrations). The old extra tag has no version or type tag, so
couldn't be reused. The data in the new tag is slightly more
efficiently packed than the old deregistration transaction, so it gets
used for deregistrations (starting at the v12 fork) as well.
- Correct validator/worker selection required generalizing the shuffle
function to be able to shuffle just part of a vector. This lets us
stick any down service nodes at the end of the potential list, then
select validators by only shuffling the part of the index vector that
contains active service indices. Once the validators are selected, the
remainder of the list (this time including decommissioned SN indices) is
shuffled to select quorum workers to check, thus allowing decommissioned
nodes to be randomly included in the nodes to check without being
selected as a validator.
- Swarm recalculation was not quite right: swarms were recalculated on
SN registrations, even if those registrations were shared node
registrations, but *not* recalculated on stakes. Starting with the
upgrade this behaviour is fixed (swarms aren't actually used currently
and aren't consensus-relevant so recalculating early won't hurt
anything).
- Details on decomm/dereg are added to RPC info and print_sn/print_sn_status
- Slightly improves the % of reward output in the print_sn output by
rounding it to two digits, and reserves space in the output string to
avoid excessive reallocations.
- Adds various debugging at higher debug levels to quorum voting (into
all of voting itself, vote transmission, and vote reception).
- Reset service node list internal data structure version to 0. The SN
list has to be rescanned anyway at upgrade (its size has changed), so we
might as well reset the version and remove the version-dependent
serialization code. (Note that the affected code here is for SN states
in lmdb storage, not for SN-to-SN communication serialization).
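To make the numbers above concrete, here is a minimal sketch of the credit arithmetic using
the example values quoted in this commit (0 initial credit, a 720-block / 24-hour cap reached
after 30 days of uptime, and 240 blocks / 8 hours of credit required before a decommission is
used instead of a deregister). The constant and function names below are illustrative only and
do not claim to match service_node_rules.h:

// Sketch of the decommission credit scheme described above (illustrative names and values).
constexpr uint64_t CREDIT_BLOCKS_PER_DAY      = 720;      // 24 hours of blocks, per the notes above
constexpr uint64_t CREDIT_INITIAL             = 0;        // no credit at registration
constexpr uint64_t CREDIT_EARNED_PER_DAY      = 720 / 30; // reach the cap after 30 days online
constexpr uint64_t CREDIT_MAX                 = 720;      // capped at 24 hours of credit
constexpr uint64_t CREDIT_MIN_TO_DECOMMISSION = 240;      // 8 hours required to earn a decommission

// Credit available after `blocks_online` blocks of continuous uptime.
inline uint64_t earned_credit(uint64_t blocks_online)
{
  uint64_t credit = CREDIT_INITIAL + blocks_online * CREDIT_EARNED_PER_DAY / CREDIT_BLOCKS_PER_DAY;
  return credit > CREDIT_MAX ? CREDIT_MAX : credit;
}

// Quorum decision once uptime proofs stop arriving: decommission if enough credit has been
// earned, otherwise deregister immediately.
inline bool should_decommission(uint64_t blocks_online)
{
  return earned_credit(blocks_online) >= CREDIT_MIN_TO_DECOMMISSION;
}
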
bool service_node_list::is_service_node(const crypto::public_key &pubkey, bool require_active) const
{
  std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);
  auto it = m_state.service_nodes_infos.find(pubkey);
Use shared_ptr storage for service_node_info
This converts the stored service_node_info value into a
`shared_ptr<const service_node_info>` rather than a plain
`service_node_info`. This yields a huge performance benefit by
eliminating the vast majority of service_node_info
construction, destruction, and copying.
Most of the time when we copy a service_node_info nothing in it has
changed, which means we're storing exactly the same thing; this means an
extra construction for every SN info on every block *and* an extra
destruction when we cull old stored history. By using a
shared_ptr, the vast majority of those constructions and destructions
are eliminated.
The immediately previous commit (upon which this one builds) already
reduced a full rescan from 180s to 171s; this commit further reduces
that time to 104s, or about 42% reduced from the rescan time required
before this pair of commits. (All timings are from the dev.lokinet.org
box, tested over multiple runs with the entire lmdb cached in memory).
With the shared_ptr approach, we only make a copy when a change is
actually needed: because of infrequent (at the per-SN level) events like
a state_change, received reward, contribution, etc. The contained
reference is deliberately `const` so that values are not changeable;
there's a new function that does an explicit copying duplication,
returning the new non-const and storing the const ref in the shared
pointer.
Related to this is a small change (and fix) to how proof info and
public_ip/storage_port are stored: rather than store the values in the
service_node_info struct itself, they now get stored in a shared_ptr
inside the service_node_info that intentionally gets shared among all
copies of the service_node_info (that is, a SN info copy deliberately
copies the pointer rather than the values). This also moves the ip/port
values into the proof struct, since that seemed much easier than
maintaining a separate shared_ptr for each value.
Previously, because these were stored as values in the service_node_info
they would actually get rolled back in the event of a reorg, but that
seems highly undesirable: you would end up rolling back to the old
values of the uptime proof and ip address (for example), but that should
not happen: those values are not dependent on the blockchain and so
should not be affected by a reorg/rollback. With this change they
aren't since there is only one actual proof stored.
Note that the shared storage here only applies to in-memory states;
states loaded from the db will still be duplicated.
  return it != m_state.service_nodes_infos.end() && (!require_active || it->second->is_active());
}
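A minimal, self-contained sketch of the copy-on-write scheme the commit message above
describes (the struct, field, and function names here are invented for illustration; they are
not the real service_node_info interface):

#include <cstdint>
#include <memory>

struct sn_record { uint64_t last_reward_block_height = 0; };
using sn_record_ptr = std::shared_ptr<const sn_record>;

// "Copying" the state for a new block only copies pointers; the record itself is shared
// and kept immutable through the const pointee.
inline sn_record_ptr copy_for_new_block(const sn_record_ptr &prev) { return prev; }

// A writer makes the one real copy, mutates it, then swaps it into place; any older state
// still holding the previous pointer is unaffected.
inline void set_last_reward(sn_record_ptr &slot, uint64_t height)
{
  auto dup = std::make_shared<sn_record>(*slot);
  dup->last_reward_block_height = height;
  slot = std::move(dup);
}
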
bool service_node_list::is_key_image_locked(crypto::key_image const &check_image, uint64_t *unlock_height, service_node_info::contribution_t *the_locked_contribution) const
{
  for (const auto &pubkey_info : m_state.service_nodes_infos)
  {
    const service_node_info &info = *pubkey_info.second;
    for (const service_node_info::contributor_t &contributor : info.contributors)
    {
      for (const service_node_info::contribution_t &contribution : contributor.locked_contributions)
      {
        if (check_image == contribution.key_image)
        {
          if (the_locked_contribution) *the_locked_contribution = contribution;
          if (unlock_height) *unlock_height = info.requested_unlock_height;
          return true;
        }
      }
    }
  }

  return false;
}
bool reg_tx_extract_fields(const cryptonote::transaction &tx, std::vector<cryptonote::account_public_address> &addresses, uint64_t &portions_for_operator, std::vector<uint64_t> &portions, uint64_t &expiration_timestamp, crypto::public_key &service_node_key, crypto::signature &signature)
{
  cryptonote::tx_extra_service_node_register registration;
  if (!get_service_node_register_from_tx_extra(tx.extra, registration))
    return false;
  if (!cryptonote::get_service_node_pubkey_from_tx_extra(tx.extra, service_node_key))
    return false;

  addresses.clear();
  addresses.reserve(registration.m_public_spend_keys.size());
  for (size_t i = 0; i < registration.m_public_spend_keys.size(); i++) {
    addresses.emplace_back();
    addresses.back().m_spend_public_key = registration.m_public_spend_keys[i];
    addresses.back().m_view_public_key  = registration.m_public_view_keys[i];
  }

  portions_for_operator = registration.m_portions_for_operator;
  portions = registration.m_portions;
  expiration_timestamp = registration.m_expiration_timestamp;
  signature = registration.m_service_node_signature;
  return true;
}
uint64_t offset_testing_quorum_height(quorum_type type, uint64_t height)
{
  uint64_t result = height;
  if (type == quorum_type::checkpointing)
  {
    if (result < REORG_SAFETY_BUFFER_BLOCKS_POST_HF12)
      return 0;
    result -= REORG_SAFETY_BUFFER_BLOCKS_POST_HF12;
  }
  return result;
}
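
// To make the clamping behaviour concrete (illustrative checks only; they assume the reorg
// safety buffer is a small non-zero constant well below the heights used, and that
// quorum_type::obligations exists as used elsewhere in this file).
#include <cassert>

static void example_offset_checks()
{
  // Checkpointing lookups are shifted back by the reorg safety buffer...
  assert(offset_testing_quorum_height(quorum_type::checkpointing, 100000)
         == 100000 - REORG_SAFETY_BUFFER_BLOCKS_POST_HF12);
  // ...but never underflow past the genesis height...
  assert(offset_testing_quorum_height(quorum_type::checkpointing, 0) == 0);
  // ...and every other quorum type is passed through unchanged.
  assert(offset_testing_quorum_height(quorum_type::obligations, 100000) == 100000);
}
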
struct parsed_tx_contribution
{
  cryptonote::account_public_address address;
  uint64_t transferred;
  crypto::secret_key tx_key;
  std::vector<service_node_info::contribution_t> locked_contributions;
};
static uint64_t get_reg_tx_staking_output_contribution(const cryptonote::transaction &tx, int i, crypto::key_derivation const &derivation, hw::device &hwdev)
{
  if (tx.vout[i].target.type() != typeid(cryptonote::txout_to_key))
  {
    return 0;
  }

  rct::key mask;
  uint64_t money_transferred = 0;

  crypto::secret_key scalar1;
  hwdev.derivation_to_scalar(derivation, i, scalar1);
  try
  {
    switch (tx.rct_signatures.type)
    {
    case rct::RCTTypeSimple:
    case rct::RCTTypeBulletproof:
    case rct::RCTTypeBulletproof2:
      money_transferred = rct::decodeRctSimple(tx.rct_signatures, rct::sk2rct(scalar1), i, mask, hwdev);
      break;
    case rct::RCTTypeFull:
      money_transferred = rct::decodeRct(tx.rct_signatures, rct::sk2rct(scalar1), i, mask, hwdev);
      break;
    default:
      LOG_PRINT_L0("Unsupported rct type: " << tx.rct_signatures.type);
      return 0;
    }
  }
  catch (const std::exception &e)
  {
    LOG_PRINT_L0("Failed to decode input " << i);
    return 0;
  }

  return money_transferred;
}
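
// A sketch of how this helper is typically driven. This is not taken from this file's
// registration-parsing code, which may instead derive from the tx secret key published in
// the registration extra; both derivations produce the same shared secret. The function and
// variable names below are illustrative, while get_tx_pub_key_from_extra,
// generate_key_derivation and hw::get_device are the usual cryptonote/crypto calls.
static uint64_t example_total_transferred(const cryptonote::transaction &tx,
                                          const crypto::secret_key &recipient_view_secret_key)
{
  crypto::public_key tx_pub_key = cryptonote::get_tx_pub_key_from_extra(tx);
  crypto::key_derivation derivation;
  if (!crypto::generate_key_derivation(tx_pub_key, recipient_view_secret_key, derivation))
    return 0;

  hw::device &hwdev = hw::get_device("default");
  uint64_t total = 0;
  for (size_t i = 0; i < tx.vout.size(); i++)
    total += get_reg_tx_staking_output_contribution(tx, static_cast<int>(i), derivation, hwdev);
  return total;
}
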
/// Makes a copy of the given service_node_info and replaces the shared_ptr with a pointer to the copy.
/// Returns the non-const service_node_info (which is now held by the passed-in shared_ptr lvalue ref).
static service_node_info &duplicate_info(std::shared_ptr<const service_node_info> &info_ptr) {
  auto new_ptr = std::make_shared<service_node_info>(*info_ptr);
  info_ptr = new_ptr;
  return *new_ptr;
}
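
// The intended call pattern, sketched (illustrative helper; snode_pubkey and unlock_at_height
// are placeholders, requested_unlock_height is a field used elsewhere in this file, and the
// access assumes state_t's members are visible here): look the entry up, duplicate it only
// when a write is actually required, and mutate through the returned reference so states that
// still point at the old record are untouched.
static void example_set_unlock_height(service_node_list::state_t &state,
                                      crypto::public_key const &snode_pubkey,
                                      uint64_t unlock_at_height)
{
  auto iter = state.service_nodes_infos.find(snode_pubkey);
  if (iter == state.service_nodes_infos.end())
    return;
  service_node_info &writable = duplicate_info(iter->second); // the map now holds the fresh copy
  writable.requested_unlock_height = unlock_at_height;        // older shared copies keep the old values
}
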
bool service_node_list::state_t::process_state_change_tx(std::set<state_t> const &state_history,
                                                         std::unordered_map<crypto::hash, state_t> const &alt_states,
                                                         cryptonote::network_type nettype,
                                                         const cryptonote::block &block,
                                                         const cryptonote::transaction &tx,
                                                         crypto::public_key const *my_pubkey)
{
  if (tx.type != cryptonote::txtype::state_change)
    return false;
  uint8_t const hf_version = block.major_version;
  cryptonote::tx_extra_service_node_state_change state_change;
  if (!cryptonote::get_service_node_state_change_from_tx_extra(tx.extra, state_change, hf_version))
  {
    LOG_PRINT_L1("Transaction: " << cryptonote::get_transaction_hash(tx) << ", did not have valid state change data in tx extra, rejecting malformed tx");
    return false;
  }

  auto it = state_history.find(state_change.block_height);
  if (it == state_history.end())
  {
    LOG_PRINT_L1("Transaction: " << cryptonote::get_transaction_hash(tx) << ", references quorum at height: " << state_change.block_height << ", that is not stored");
    return false;
  }

  quorum_manager const *quorums = &it->quorums;
  cryptonote::tx_verification_context tvc = {};
  if (!verify_tx_state_change(
          state_change, cryptonote::get_block_height(block), tvc, *quorums->obligations, hf_version))
  {
    quorums = nullptr;
    for (std::pair<crypto::hash, state_t> const &entry : alt_states)
    {
      state_t const &alt_state = entry.second;
      if (alt_state.height != state_change.block_height) continue;

      quorums = &alt_state.quorums;
      if (!verify_tx_state_change(state_change, cryptonote::get_block_height(block), tvc, *quorums->obligations, hf_version))
      {
        quorums = nullptr;
        continue;
      }
    }
  }

  if (!quorums)
  {
    LOG_PRINT_L1("Could not get a quorum that could completely validate the votes from state change in tx: " << get_transaction_hash(tx) << ", skipping transaction");
    return false;
  }

  crypto::public_key key;
  if (!get_pubkey_from_quorum(*quorums->obligations, quorum_group::worker, state_change.service_node_index, key))
  {
    LOG_PRINT_L1("Retrieving the public key from state change in tx: " << cryptonote::get_transaction_hash(tx) << " failed");
    return false;
  }

  auto iter = service_nodes_infos.find(key);
  if (iter == service_nodes_infos.end()) {
    LOG_PRINT_L2("Received state change tx for non-registered service node " << key << " (perhaps a delayed tx?)");
    return false;
  }

  uint64_t block_height = cryptonote::get_block_height(block);
  auto &info = duplicate_info(iter->second);
  bool is_me = my_pubkey && *my_pubkey == key;
  switch (state_change.state) {
    case new_state::deregister:
      if (is_me)
        MGINFO_RED("Deregistration for service node (yours): " << key);
      else
        LOG_PRINT_L1("Deregistration for service node: " << key);

      if (hf_version >= cryptonote::network_version_11_infinite_staking)
      {
Relax deregistration rules
The replaces the deregistration mechanism with a new state change
mechanism (beginning at the v12 fork) which can change a service node's
network status via three potential values (and is extensible in the
future to handle more):
- deregistered -- this is the same as the existing deregistration; the
SN is instantly removed from the SN list.
- decommissioned -- this is a sort of temporary deregistration: your SN
remains in the service node list, but is removed from the rewards list
and from any network duties.
- recommissioned -- this tx is sent by a quorum if they observe a
decommissioned SN sending uptime proofs again. Upon reception, the SN
is reactivated and put on the end of the reward list.
Since this is broadening the quorum use, this also renames the relevant
quorum to a "obligations" quorum (since it validates SN obligations),
while the transactions are "state_change" transactions (since they
change the state of a registered SN).
The new parameters added to service_node_rules.h control how this works:
// Service node decommissioning: as service nodes stay up they earn "credits" (measured in blocks)
// towards a future outage. A new service node starts out with INITIAL_CREDIT, and then builds up
// CREDIT_PER_DAY for each day the service node remains active up to a maximum of
// DECOMMISSION_MAX_CREDIT.
//
// If a service node stops sending uptime proofs, a quorum will consider whether the service node
// has built up enough credits (at least MINIMUM): if so, instead of submitting a deregistration,
// it instead submits a decommission. This removes the service node from the list of active
// service nodes both for rewards and for any active network duties. If the service node comes
// back online (i.e. starts sending the required performance proofs again) before the credits run
// out then a quorum will reinstate the service node using a recommission transaction, which adds
// the service node back to the bottom of the service node reward list, and resets its accumulated
// credits to 0. If it does not come back online within the required number of blocks (i.e. the
// accumulated credit at the point of decommissioning) then a quorum will send a permanent
// deregistration transaction to the network, starting a 30-day deregistration countdown.
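As a rough sketch of the decision that comment describes (the enum and function below are invented for illustration; this is not quorum code from the repository):
#include <cstdint>

// Hypothetical sketch of the quorum's choice for a node that has stopped
// sending uptime proofs.
enum struct obligations_vote { deregister, decommission };

inline obligations_vote vote_for_unreachable_node(uint64_t credit_blocks, uint64_t minimum_credit)
{
  // Enough accumulated credit buys a temporary decommission; otherwise the
  // quorum votes to deregister the node outright.
  return credit_blocks >= minimum_credit ? obligations_vote::decommission
                                         : obligations_vote::deregister;
}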
This commit currently includes values (which are not necessarily
finalized):
- 8 hours (240 blocks) of credit required for activation of a
decommission (rather than a deregister)
- 0 initial credits at registration
- a maximum of 24 hours (720 blocks) of credits
- credits accumulate at a rate such that you hit 24 hours of credits after
30 days of operation (see the sketch below).
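For concreteness, those values could be written out roughly as follows. This is a sketch only, not the actual contents of service_node_rules.h; the constant names, the assumed 2-minute block target (30 blocks per hour), and the derived per-day rate are all assumptions.
#include <algorithm>
#include <cstdint>

constexpr uint64_t BLOCKS_PER_HOUR             = 30;                             // assuming a 2-minute block target
constexpr uint64_t BLOCKS_PER_DAY              = 24 * BLOCKS_PER_HOUR;           // 720
constexpr uint64_t DECOMMISSION_MINIMUM        = 8 * BLOCKS_PER_HOUR;            // 240 blocks of credit to decommission rather than deregister
constexpr uint64_t DECOMMISSION_INITIAL_CREDIT = 0;                              // no credit at registration
constexpr uint64_t DECOMMISSION_MAX_CREDIT     = 24 * BLOCKS_PER_HOUR;           // 720-block (24-hour) cap
constexpr uint64_t DECOMMISSION_CREDIT_PER_DAY = DECOMMISSION_MAX_CREDIT / 30;   // cap reached after ~30 days of uptime

// Hypothetical credit available to a node that has been online for `blocks_online` blocks.
inline uint64_t decommission_credit(uint64_t blocks_online)
{
  uint64_t credit = DECOMMISSION_INITIAL_CREDIT + blocks_online * DECOMMISSION_CREDIT_PER_DAY / BLOCKS_PER_DAY;
  return std::min(credit, DECOMMISSION_MAX_CREDIT);
}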
Miscellaneous other details of this PR:
- a new TX extra tag is used for the state change (including
deregistrations). The old extra tag has no version or type tag, so
couldn't be reused. The data in the new tag is slightly more
efficiently packed than the old deregistration transaction, so it gets
used for deregistrations (starting at the v12 fork) as well.
- Correct validator/worker selection required generalizing the shuffle
function to be able to shuffle just part of a vector. This lets us
stick any down service nodes at the end of the potential list, then
select validators by only shuffling the part of the index vector that
contains active service indices. Once the validators are selected, the
remainder of the list (this time including decommissioned SN indices) is
shuffled to select quorum workers to check, thus allowing decommissioned
nodes to be randomly included in the nodes to check without being
selected as a validator (see the partial-shuffle sketch after this list).
- Swarm recalculation was not quite right: swarms were recalculated on
SN registrations, even if those registrations included shared node
registrations, but *not* recalculated on stakes. Starting with the
upgrade this behaviour is fixed (swarms aren't actually used currently
and aren't consensus-relevant so recalculating early won't hurt
anything).
- Details on decomm/dereg are added to RPC info and print_sn/print_sn_status
- Slightly improves the % of reward output in the print_sn output by
rounding it to two digits, and reserves space in the output string to
avoid excessive reallocations.
- Adds various debugging at higher debug levels to quorum voting (into
all of voting itself, vote transmission, and vote reception).
- Reset service node list internal data structure version to 0. The SN
list has to be rescanned anyway at upgrade (its size has changed), so we
might as well reset the version and remove the version-dependent
serialization code. (Note that the affected code here is for SN states
in lmdb storage, not for SN-to-SN communication serialization).
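The partial shuffle described in the validator/worker bullet above might look roughly like this. It is a minimal sketch assuming a Fisher-Yates shuffle over an index vector; the function names, RNG handling, and quorum sizes are illustrative, not the repository's actual shuffle helper.
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// Shuffle only the sub-range [begin, end) of an index vector (Fisher-Yates).
template <typename RNG>
void shuffle_portion(std::vector<size_t> &indices, size_t begin, size_t end, RNG &rng)
{
  if (end > indices.size())
    end = indices.size();
  for (size_t i = begin; i + 1 < end; i++)
  {
    std::uniform_int_distribution<size_t> dist(i, end - 1);
    std::swap(indices[i], indices[dist(rng)]);
  }
}

// Validators are drawn only from the active prefix; the remainder (which still
// contains the decommissioned tail) is then shuffled so that decommissioned
// nodes can appear among the workers to be checked.
template <typename RNG>
void pick_obligations_quorum(std::vector<size_t> &indices, size_t num_active, size_t num_validators, RNG &rng)
{
  shuffle_portion(indices, 0, num_active, rng);                    // first num_validators entries become validators
  shuffle_portion(indices, num_validators, indices.size(), rng);   // following entries become the workers to test
}
Shuffling in place over a single index vector avoids copying the service node list and keeps the active/decommissioned partition explicit.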
        for (const auto &contributor : info.contributors)
        {
          for (const auto &contribution : contributor.locked_contributions)
          {
            key_image_blacklist.emplace_back(); // NOTE: Use default value for version in key_image_blacklist_entry
            key_image_blacklist_entry &entry = key_image_blacklist.back();
            entry.key_image     = contribution.key_image;
            entry.unlock_height = block_height + staking_num_lock_blocks(nettype);
          }
        }
      }

      service_nodes_infos.erase(iter);
      return true;

    case new_state::decommission:
      if (hf_version < cryptonote::network_version_12_checkpointing) {
        MERROR("Invalid decommission transaction seen before network v12");
        return false;
      }

      if (info.is_decommissioned()) {
        LOG_PRINT_L2("Received decommission tx for already-decommissioned service node " << key << "; ignoring");
        return false;
      }

      if (is_me)
        MGINFO_RED("Temporary decommission for service node (yours): " << key);
      else
        LOG_PRINT_L1("Temporary decommission for service node: " << key);

      info.active_since_height = -info.active_since_height;
      info.last_decommission_height = block_height;
      info.decommission_count++;

      if (hf_version >= cryptonote::network_version_13_enforce_checkpoints) {
        // Assigning invalid swarm id effectively kicks the node off
        // its current swarm; it will be assigned a new swarm id when it
        // gets recommissioned. Prior to HF13 this step was incorrectly
        // skipped.
        info.swarm_id = UNASSIGNED_SWARM_ID;
      }
Fix 4.0.4 uptime proof transmission after recomm
When a node gets recommissioned in 4.0.4 we reset its timestamp to the
current time to delay obligations checks for newly recommissioned nodes,
but this reset caused problems:
- the code runs not only when a fresh block is received, but also when
syncing or rescanning, and so time(NULL) gets used to update the
node's timestamp even if it is an old record, and since proof info is
shared across states, this affects the current state.
- as a result of the above, a just-rescanned node that has been
decommissioned at some point in the past will think it has just sent a
proof, and so won't send any proofs for an hour.
- A just-rescanned node won't accept or relay any proofs for any node
that was recommissioned in its scan for the first half hour, but this
lack of relaying can cause chaos in getting uptime proofs out across the
network, especially while we still have 4.0.3 nodes that need it.
To address the first issue, this switches the recommissioning to use the
block timestamp rather than the current timestamp. This *will* be
slightly delayed in the case of current blocks (since a block timestamp
is the time the pool *started* working on the block, which is generally
the time the previous block was found on the network), but even with an
exceptionally long block delay (e.g. 20 minutes) we are still fending
off obligations checks for 1h40m.
That would partially fix issues 2 and 3, but we actually don't want a
recommissioning to look like a received uptime proof, for a couple of
reasons:
- When we haven't actually received an uptime proof it's confusing to
report that we have (at the recommission time) and may mask an
underlying issue of a node that isn't actually sending proofs for some
reason (which might be more common for a node that has just been
decommissioned/recommissioned). There's also a related weird state
here for nodes that have come on recently: they think the SN is
active, but have 0's for IP and storage server port.
- 4.0.3 nodes don't get the updated timestamp and so really need the
proof to come through even when the 4.0.4 nodes don't think it's
important/acceptable.
So to also fix these, this commit adds an "effective_timestamp" to the
proof info: if it is larger than the actual timestamp field, we use it
instead of the actual one for obligations checking. On a recommission,
we update only the effective field so that we can delay obligations
checking for a couple of hours without delaying actual proof info going
over the network.
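A minimal sketch of that idea, assuming a simplified proof structure: the struct, member, and helper names below are illustrative only, not the codebase's actual proof info type.
#include <algorithm>
#include <cstdint>

struct proof_info_sketch
{
  uint64_t timestamp           = 0;  // when an uptime proof was actually received
  uint64_t effective_timestamp = 0;  // bumped on recommission; never relayed

  // Receiving a real proof updates both fields, so proof relaying is unaffected.
  void update_timestamp(uint64_t ts) { timestamp = effective_timestamp = ts; }

  // Obligations checks use whichever timestamp is later, so a recommission can
  // delay failure checks without pretending a proof arrived.
  uint64_t obligations_timestamp() const { return std::max(timestamp, effective_timestamp); }
};

// On recommission only the effective field is raised, using the block timestamp
// rather than time(NULL) so that rescans and syncs behave deterministically:
//   info.proof->effective_timestamp = block.timestamp;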
      info.proof->update_timestamp(0);
      return true;

    case new_state::recommission:
      if (hf_version < cryptonote::network_version_12_checkpointing) {
        MERROR("Invalid recommission transaction seen before network v12");
        return false;
      }

      if (!info.is_decommissioned()) {
        LOG_PRINT_L2("Received recommission tx for already-active service node " << key << "; ignoring");
        return false;
      }

      if (is_me)
        MGINFO_GREEN("Recommission for service node (yours): " << key);
      else
        LOG_PRINT_L1("Recommission for service node: " << key);
      info.active_since_height = block_height;

      // Move the SN to the back of the list as if it had just registered (or just won)
      info.last_reward_block_height = block_height;
      info.last_reward_transaction_index = std::numeric_limits<uint32_t>::max();
      // NOTE: Only the quorum deciding on this node agrees that the service
      // node has had recent enough uptime for it to be recommissioned; the
      // rest of the network has not necessarily seen a recent proof. Ensure
      // the entire network treats the node as online at recommission by resetting
      // the failure conditions. We set only the effective but not *actual*
      // timestamp so that we delay obligations checks but don't prevent the
      // next actual proof from being sent/relayed.
      info.proof->effective_timestamp = block.timestamp;
      info.proof->votes.fill({});
      return true;

    case new_state::ip_change_penalty:
      if (hf_version < cryptonote::network_version_12_checkpointing) {
        MERROR("Invalid ip_change_penalty transaction seen before network v12");
        return false;
      }
      if (info.is_decommissioned()) {
        LOG_PRINT_L2("Received reset position tx for service node " << key << " but it is already decommissioned; ignoring");
        return false;
      }

      if (is_me)
        MGINFO_RED("Reward position reset for service node (yours): " << key);
      else
        LOG_PRINT_L1("Reward position reset for service node: " << key);
      // Move the SN to the back of the list as if it had just registered (or just won)
      info.last_reward_block_height = block_height;
      info.last_reward_transaction_index = std::numeric_limits<uint32_t>::max();
      info.last_ip_change_height = block_height;

      return true;
    default:
      // dev bug!
      MERROR("BUG: Service node state change tx has unknown state " << static_cast<uint16_t>(state_change.state));
      return false;
  }
}

bool service_node_list::state_t::process_key_image_unlock_tx(cryptonote::network_type nettype, uint64_t block_height, const cryptonote::transaction &tx)
{
  crypto::public_key snode_key;
  if (!cryptonote::get_service_node_pubkey_from_tx_extra(tx.extra, snode_key))
    return false;

  auto it = service_nodes_infos.find(snode_key);
  if (it == service_nodes_infos.end())
    return false;

  const service_node_info &node_info = *it->second;
  if (node_info.requested_unlock_height != KEY_IMAGE_AWAITING_UNLOCK_HEIGHT)
  {
    LOG_PRINT_L1("Unlock TX: Node already requested an unlock at height: "
                 << node_info.requested_unlock_height << " rejected on height: " << block_height
                 << " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }

  cryptonote::tx_extra_tx_key_image_unlock unlock;
  if (!cryptonote::get_tx_key_image_unlock_from_tx_extra(tx.extra, unlock))
  {
    LOG_PRINT_L1("Unlock TX: Didn't have key image unlock in the tx_extra, rejected on height: "
                 << block_height << " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }

  uint64_t unlock_height = get_locked_key_image_unlock_height(nettype, node_info.registration_height, block_height);
  for (const auto &contributor : node_info.contributors)
  {
    auto cit = std::find_if(contributor.locked_contributions.begin(),
                            contributor.locked_contributions.end(),
                            [&unlock](const service_node_info::contribution_t &contribution) {
                              return unlock.key_image == contribution.key_image;
                            });
    if (cit != contributor.locked_contributions.end())
    {
      // NOTE(loki): This should be checked in blockchain check_tx_inputs already
      crypto::hash const hash = service_nodes::generate_request_stake_unlock_hash(unlock.nonce);
      if (crypto::check_signature(hash, cit->key_image_pub_key, unlock.signature))
      {
        duplicate_info(it->second).requested_unlock_height = unlock_height;
        return true;
      }
      else
      {
        LOG_PRINT_L1("Unlock TX: Couldn't verify key image unlock in the tx_extra, rejected on height: "
                     << block_height << " for tx: " << get_transaction_hash(tx));
        return false;
      }
    }
  }

  return false;
}

static bool get_contribution(cryptonote::network_type nettype, int hf_version, const cryptonote::transaction &tx, uint64_t block_height, parsed_tx_contribution &parsed_contribution)
{
  if (!cryptonote::get_service_node_contributor_from_tx_extra(tx.extra, parsed_contribution.address))
    return false;

  if (!cryptonote::get_tx_secret_key_from_tx_extra(tx.extra, parsed_contribution.tx_key))
  {
    LOG_PRINT_L1("Contribution TX: There was a service node contributor but no secret key in the tx extra on height: " << block_height << " for tx: " << get_transaction_hash(tx));
    return false;
  }

  crypto::key_derivation derivation;
  if (!crypto::generate_key_derivation(parsed_contribution.address.m_view_public_key, parsed_contribution.tx_key, derivation))
  {
    LOG_PRINT_L1("Contribution TX: Failed to generate key derivation on height: " << block_height << " for tx: " << get_transaction_hash(tx));
    return false;
  }

  hw::device &hwdev = hw::get_device("default");
  parsed_contribution.transferred = 0;

  if (hf_version >= cryptonote::network_version_11_infinite_staking)
  {
    cryptonote::tx_extra_tx_key_image_proofs key_image_proofs;
    if (!get_tx_key_image_proofs_from_tx_extra(tx.extra, key_image_proofs))
    {
      LOG_PRINT_L1("Contribution TX: Didn't have key image proofs in the tx_extra, rejected on height: " << block_height << " for tx: " << get_transaction_hash(tx));
      return false;
    }
    for (size_t output_index = 0; output_index < tx.vout.size(); ++output_index)
    {
      uint64_t transferred = get_reg_tx_staking_output_contribution(tx, output_index, derivation, hwdev);
      if (transferred == 0)
        continue;

      crypto::public_key ephemeral_pub_key;
      {
        if (!hwdev.derive_public_key(derivation, output_index, parsed_contribution.address.m_spend_public_key, ephemeral_pub_key))
        {
          LOG_PRINT_L1("Contribution TX: Could not derive TX ephemeral key on height: " << block_height << " for tx: " << get_transaction_hash(tx) << " for output: " << output_index);
          continue;
        }

        const auto &out_to_key = boost::get<cryptonote::txout_to_key>(tx.vout[output_index].target);
        if (out_to_key.key != ephemeral_pub_key)
        {
          LOG_PRINT_L1("Contribution TX: Derived TX ephemeral key did not match tx stored key on height: " << block_height << " for tx: " << get_transaction_hash(tx) << " for output: " << output_index);
          continue;
        }
      }

      crypto::public_key const *ephemeral_pub_key_ptr = &ephemeral_pub_key;
      for (auto proof = key_image_proofs.proofs.begin(); proof != key_image_proofs.proofs.end(); proof++)
      {
        if (!crypto::check_ring_signature((const crypto::hash &)(proof->key_image), proof->key_image, &ephemeral_pub_key_ptr, 1, &proof->signature))
          continue;

        parsed_contribution.locked_contributions.emplace_back(
            service_node_info::version_0_checkpointing,
            ephemeral_pub_key,
            proof->key_image,
            transferred);

        parsed_contribution.transferred += transferred;
        key_image_proofs.proofs.erase(proof);
        break;
      }
    }
  }
  else
  {
    for (size_t i = 0; i < tx.vout.size(); i++)
    {
      bool has_correct_unlock_time = false;
      {
        uint64_t unlock_time = tx.unlock_time;
        if (tx.version >= cryptonote::txversion::v3_per_output_unlock_times)
          unlock_time = tx.output_unlock_times[i];

        uint64_t min_height = block_height + staking_num_lock_blocks(nettype);
        has_correct_unlock_time = unlock_time < CRYPTONOTE_MAX_BLOCK_NUMBER && unlock_time >= min_height;
      }

      if (has_correct_unlock_time)
        parsed_contribution.transferred += get_reg_tx_staking_output_contribution(tx, i, derivation, hwdev);
    }
  }

  return true;
}

bool is_registration_tx(cryptonote::network_type nettype, uint8_t hf_version, const cryptonote::transaction &tx, uint64_t block_timestamp, uint64_t block_height, uint32_t index, crypto::public_key &key, service_node_info &info)
{
  crypto::public_key service_node_key;
  std::vector<cryptonote::account_public_address> service_node_addresses;
  std::vector<uint64_t> service_node_portions;
  uint64_t portions_for_operator;
  uint64_t expiration_timestamp;
  crypto::signature signature;

  if (!reg_tx_extract_fields(tx, service_node_addresses, portions_for_operator, service_node_portions, expiration_timestamp, service_node_key, signature))
    return false;

  if (service_node_portions.size() != service_node_addresses.size() || service_node_portions.empty())
  {
    LOG_PRINT_L1("Register TX: Extracted portions size: (" << service_node_portions.size() <<
                 ") was empty or did not match address size: (" << service_node_addresses.size() <<
                 ") on height: " << block_height <<
                 " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }

  if (!check_service_node_portions(hf_version, service_node_portions)) return false;

  if (portions_for_operator > STAKING_PORTIONS)
  {
    LOG_PRINT_L1("Register TX: Operator portions: " << portions_for_operator <<
                 " exceeded staking portions: " << STAKING_PORTIONS <<
                 " on height: " << block_height <<
                 " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }

  // check the signature is all good
  crypto::hash hash;
  if (!get_registration_hash(service_node_addresses, portions_for_operator, service_node_portions, expiration_timestamp, hash))
  {
    LOG_PRINT_L1("Register TX: Failed to extract registration hash, on height: " << block_height << " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }
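
  // The registration must be signed by the registering service node key over the
  // registration hash computed above; reject it if the key is invalid or the signature fails.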
  if (!crypto::check_key(service_node_key) || !crypto::check_signature(hash, service_node_key, signature))
  {
    LOG_PRINT_L1("Register TX: Has invalid key and/or signature, on height: " << block_height << " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }

  if (expiration_timestamp < block_timestamp)
  {
    LOG_PRINT_L1("Register TX: Has expired. The block timestamp: " << block_timestamp <<
                 " is greater than the expiration timestamp: " << expiration_timestamp <<
                 " on height: " << block_height <<
                 " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }

  // check the initial contribution exists
  info.staking_requirement = get_staking_requirement(nettype, block_height, hf_version);

  cryptonote::account_public_address address;
  parsed_tx_contribution parsed_contribution = {};
  if (!get_contribution(nettype, hf_version, tx, block_height, parsed_contribution))
  {
    LOG_PRINT_L1("Register TX: Had service node registration fields, but could not decode contribution on height: " << block_height << " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }
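
  // The registration tx must itself carry the initial stake; check the decoded
  // contribution against the minimum required for the next contribution slot.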
  const uint64_t min_transfer = get_min_node_contribution(hf_version, info.staking_requirement, info.total_reserved, info.total_num_locked_contributions());
  if (parsed_contribution.transferred < min_transfer)
  {
    LOG_PRINT_L1("Register TX: Contribution transferred: " << parsed_contribution.transferred << " didn't meet the minimum transfer requirement: " << min_transfer << " on height: " << block_height << " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }
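
  // The contributing address may not be one of the reserved addresses; if not, it takes
  // an extra participant slot, so count it before checking against the contributor cap.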
  size_t total_num_of_addr = service_node_addresses.size();
  if (std::find(service_node_addresses.begin(), service_node_addresses.end(), parsed_contribution.address) == service_node_addresses.end())
    total_num_of_addr++;

  if (total_num_of_addr > MAX_NUMBER_OF_CONTRIBUTORS)
  {
    LOG_PRINT_L1("Register TX: Number of participants: " << total_num_of_addr <<
                 " exceeded the max number of contributors: " << MAX_NUMBER_OF_CONTRIBUTORS <<
                 " on height: " << block_height <<
                 " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }

  // don't actually process this contribution now, do it when we fall through later.
  key = service_node_key;

  info.operator_address = service_node_addresses[0];
  info.portions_for_operator = portions_for_operator;
  info.registration_height = block_height;
  info.registration_hf_version = hf_version;
  info.last_reward_block_height = block_height;
  info.last_reward_transaction_index = index;
  info.active_since_height = 0;
  info.last_decommission_height = 0;
  info.decommission_count = 0;
  info.total_contributed = 0;
  info.total_reserved = 0;
  info.swarm_id = UNASSIGNED_SWARM_ID;
  info.proof->public_ip = 0;
  info.proof->storage_port = 0;
  info.last_ip_change_height = block_height;
  info.version = get_min_service_node_info_version_for_hf(hf_version);

  info.contributors.clear();
  for (size_t i = 0; i < service_node_addresses.size(); i++)
  {
    // Check for duplicates
    auto iter = std::find(service_node_addresses.begin(), service_node_addresses.begin() + i, service_node_addresses[i]);
    if (iter != service_node_addresses.begin() + i)
    {
      LOG_PRINT_L1("Register TX: There was a duplicate participant for service node on height: " << block_height << " for tx: " << cryptonote::get_transaction_hash(tx));
      return false;
    }
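
    // Each participant reserves staking_requirement * portion / STAKING_PORTIONS. The
    // product can overflow 64 bits, so it is computed via a 128-bit intermediate
    // (mul128 then div128_64); e.g. a portion of STAKING_PORTIONS / 2 reserves half of
    // the staking requirement (rounded down).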
    uint64_t hi, lo, resulthi, resultlo;
    lo = mul128(info.staking_requirement, service_node_portions[i], &hi);
    div128_64(hi, lo, STAKING_PORTIONS, &resulthi, &resultlo);

    info.contributors.emplace_back();
    auto &contributor = info.contributors.back();
    contributor.reserved = resultlo;
    contributor.address = service_node_addresses[i];
    info.total_reserved += resultlo;
  }

  return true;
}

bool service_node_list::state_t::process_registration_tx(cryptonote::network_type nettype, const cryptonote::block &block, const cryptonote::transaction &tx, uint32_t index, crypto::public_key const *my_pubkey)
{
  uint8_t const hf_version = block.major_version;
  uint64_t const block_timestamp = block.timestamp;
  uint64_t const block_height = cryptonote::get_block_height(block);

  crypto::public_key key;
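  // Service node infos are stored as shared_ptr<const service_node_info>; build the new
  // record in a fresh mutable instance and only publish it into the map once populated.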
  auto info_ptr = std::make_shared<service_node_info>();
  service_node_info &info = *info_ptr;
  if (!is_registration_tx(nettype, hf_version, tx, block_timestamp, block_height, index, key, info))
    return false;

  if (hf_version >= cryptonote::network_version_11_infinite_staking)
  {
    // NOTE(loki): Grace period is not used anymore with infinite staking. So, if someone somehow reregisters, we just ignore it
    const auto iter = service_nodes_infos.find(key);
    if (iter != service_nodes_infos.end())
      return false;

    if (my_pubkey && *my_pubkey == key)
      MGINFO_GREEN("Service node registered (yours): " << key << " on height: " << block_height);
    else
      LOG_PRINT_L1("New service node registered: " << key << " on height: " << block_height);
  }
  else
  {
    // NOTE: A node doesn't expire until registration_height + lock blocks + an excess,
    // which now acts as the grace period, so it is possible to find the node still in our list.
    bool registered_during_grace_period = false;
    const auto iter = service_nodes_infos.find(key);
    if (iter != service_nodes_infos.end())
    {
      if (hf_version >= cryptonote::network_version_10_bulletproofs)
      {
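        // Grace period (pre-v11): a node that has passed its expiry height but is still in
        // the list may re-register; a duplicate registration before expiry is rejected.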
        service_node_info const &old_info = *iter->second;
        uint64_t expiry_height = old_info.registration_height + staking_num_lock_blocks(nettype);
        if (block_height < expiry_height)
          return false;

        // NOTE: Node preserves its position in list if it reregisters during grace period.
        registered_during_grace_period = true;
        info.last_reward_block_height = old_info.last_reward_block_height;
        info.last_reward_transaction_index = old_info.last_reward_transaction_index;
      }
      else
      {
        return false;
      }
    }

    if (my_pubkey && *my_pubkey == key)
    {
      if (registered_during_grace_period)
      {
        MGINFO_GREEN("Service node re-registered (yours): " << key << " at block height: " << block_height);
      }
      else
      {
        MGINFO_GREEN("Service node registered (yours): " << key << " at block height: " << block_height);
      }
    }
    else
    {
      LOG_PRINT_L1("New service node registered: " << key << " at block height: " << block_height);
    }
  }

  service_nodes_infos[key] = std::move(info_ptr);
  return true;
}

bool service_node_list::state_t::process_contribution_tx(cryptonote::network_type nettype, const cryptonote::block &block, const cryptonote::transaction &tx, uint32_t index)
{
  uint64_t const block_height = cryptonote::get_block_height(block);
  uint8_t const hf_version = block.major_version;

  crypto::public_key pubkey;
  if (!cryptonote::get_service_node_pubkey_from_tx_extra(tx.extra, pubkey))
    return false; // Not a contribution TX; nothing to check.

  parsed_tx_contribution parsed_contribution = {};
  if (!get_contribution(nettype, hf_version, tx, block_height, parsed_contribution))
  {
    LOG_PRINT_L1("Contribution TX: Could not decode contribution for service node: " << pubkey << " on height: " << block_height << " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }

  auto iter = service_nodes_infos.find(pubkey);
  if (iter == service_nodes_infos.end())
  {
    LOG_PRINT_L1("Contribution TX: Contribution received for service node: " << pubkey <<
                 ", but could not be found in the service node list on height: " << block_height <<
                 " for tx: " << cryptonote::get_transaction_hash(tx) << "\n"
                 "This could mean that the service node was deregistered before the contribution was processed.");
    return false;
  }

  const service_node_info &curinfo = *iter->second;
  if (curinfo.is_fully_funded())
  {
    LOG_PRINT_L1("Contribution TX: Service node: " << pubkey <<
                 " is already fully funded, but contribution received on height: " << block_height <<
                 " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }

  if (!cryptonote::get_tx_secret_key_from_tx_extra(tx.extra, parsed_contribution.tx_key))
  {
    LOG_PRINT_L1("Contribution TX: Failed to get tx secret key from contribution received on height: " << block_height << " for tx: " << cryptonote::get_transaction_hash(tx));
    return false;
  }
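
  // Stored infos are immutable shared_ptrs; duplicate_info() replaces the stored entry
  // with a fresh mutable copy so this node's record can absorb the new contribution.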
  auto &info = duplicate_info(iter->second);
  auto &contributors = info.contributors;

  auto contrib_iter = std::find_if(contributors.begin(), contributors.end(),
      [&parsed_contribution](const service_node_info::contributor_t &contributor) { return contributor.address == parsed_contribution.address; });
  const bool new_contributor = (contrib_iter == contributors.end());

  if (new_contributor)
  {
    if (contributors.size() >= MAX_NUMBER_OF_CONTRIBUTORS)
    {
      LOG_PRINT_L1("Contribution TX: Node is full with max contributors: " << MAX_NUMBER_OF_CONTRIBUTORS <<
                   " for service node: " << pubkey <<
                   " on height: " << block_height <<
                   " for tx: " << cryptonote::get_transaction_hash(tx));
Relax deregistration rules
The replaces the deregistration mechanism with a new state change
mechanism (beginning at the v12 fork) which can change a service node's
network status via three potential values (and is extensible in the
future to handle more):
- deregistered -- this is the same as the existing deregistration; the
SN is instantly removed from the SN list.
- decommissioned -- this is a sort of temporary deregistration: your SN
remains in the service node list, but is removed from the rewards list
and from any network duties.
- recommissioned -- this tx is sent by a quorum if they observe a
decommissioned SN sending uptime proofs again. Upon reception, the SN
is reactivated and put on the end of the reward list.
Since this is broadening the quorum use, this also renames the relevant
quorum to a "obligations" quorum (since it validates SN obligations),
while the transactions are "state_change" transactions (since they
change the state of a registered SN).
The new parameters added to service_node_rules.h control how this works:
// Service node decommissioning: as service nodes stay up they earn "credits" (measured in blocks)
// towards a future outage. A new service node starts out with INITIAL_CREDIT, and then builds up
// CREDIT_PER_DAY for each day the service node remains active up to a maximum of
// DECOMMISSION_MAX_CREDIT.
//
// If a service node stops sending uptime proofs, a quorum will consider whether the service node
// has built up enough credits (at least MINIMUM): if so, instead of submitting a deregistration,
// it instead submits a decommission. This removes the service node from the list of active
// service nodes both for rewards and for any active network duties. If the service node comes
// back online (i.e. starts sending the required performance proofs again) before the credits run
// out then a quorum will reinstate the service node using a recommission transaction, which adds
// the service node back to the bottom of the service node reward list, and resets its accumulated
// credits to 0. If it does not come back online within the required number of blocks (i.e. the
// accumulated credit at the point of decommissioning) then a quorum will send a permanent
// deregistration transaction to the network, starting a 30-day deregistration count down.
This commit currently includes values (which are not necessarily
finalized):
- 8 hours (240 blocks) of credit required for activation of a
decommission (rather than a deregister)
- 0 initial credits at registration
- a maximum of 24 hours (720 blocks) of credits
- credits accumulate at a rate that you hit 24 hours of credits after 30
days of operation.
Miscellaneous other details of this PR:
- a new TX extra tag is used for the state change (including
deregistrations). The old extra tag has no version or type tag, so
couldn't be reused. The data in the new tag is slightly more
efficiently packed than the old deregistration transaction, so it gets
used for deregistrations (starting at the v12 fork) as well.
- Correct validator/worker selection required generalizing the shuffle
function to be able to shuffle just part of a vector. This lets us
stick any down service nodes at the end of the potential list, then
select validators by only shuffling the part of the index vector that
contains active service indices. Once the validators are selected, the
remainder of the list (this time including decommissioned SN indices) is
shuffled to select quorum workers to check, thus allowing decommissioned
nodes to be randomly included in the nodes to check without being
selected as a validator.
- Swarm recalculation was not quite right: swarms were recalculated on
SN registrations, even if those registrations were shared node
registrations, but *not* recalculated on stakes. Starting with the
upgrade this behaviour is fixed (swarms aren't actually used currently
and aren't consensus-relevant so recalculating early won't hurt
anything).
- Details on decomm/dereg are added to RPC info and print_sn/print_sn_status
- Slightly improves the % of reward output in the print_sn output by
rounding it to two digits, and reserves space in the output string to
avoid excessive reallocations.
- Adds various debugging at higher debug levels to quorum voting (into
all of voting itself, vote transmission, and vote reception).
- Reset service node list internal data structure version to 0. The SN
list has to be rescanned anyway at upgrade (its size has changed), so we
might as well reset the version and remove the version-dependent
serialization code. (Note that the affected code here is for SN states
in lmdb storage, not for SN-to-SN communication serialization).
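As a rough illustration (not the actual service_node_rules.h definitions; the constant names and BLOCKS_PER_DAY = 720 are assumptions taken from the figures quoted above), the credit values combine roughly like this:

#include <algorithm>
#include <cstdint>

// Assumed illustrative values, derived from the numbers quoted in the text above.
constexpr uint64_t BLOCKS_PER_DAY              = 720; // 24 hours of ~2 minute blocks (assumption)
constexpr uint64_t DECOMMISSION_INITIAL_CREDIT = 0;   // 0 credits at registration
constexpr uint64_t DECOMMISSION_MAX_CREDIT     = 720; // cap at 24 hours of credit
constexpr uint64_t DECOMMISSION_CREDIT_PER_DAY = DECOMMISSION_MAX_CREDIT / 30; // reach the cap after 30 days
constexpr uint64_t DECOMMISSION_MINIMUM        = 240; // 8 hours of credit needed to decommission instead of deregister

// Credit a node would have earned after `blocks_online` blocks of continuous uptime.
uint64_t earned_credit(uint64_t blocks_online)
{
  uint64_t earned = DECOMMISSION_INITIAL_CREDIT + blocks_online * DECOMMISSION_CREDIT_PER_DAY / BLOCKS_PER_DAY;
  return std::min(earned, DECOMMISSION_MAX_CREDIT);
}

// Under these example values a quorum would decommission (rather than deregister) only
// once earned_credit(blocks_online) >= DECOMMISSION_MINIMUM, i.e. after
// 240 * 720 / 24 = 7200 blocks, or roughly 10 days of uptime.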
2019-06-18 23:57:02 +02:00
      return false;
2019-01-25 04:15:52 +01:00
    }
2019-01-22 00:44:30 +01:00
    /// Check that the contribution is large enough
Infinite Staking Part 2 (#406)
* Cleanup and undoing some protocol breakages
* Simplify expiration of nodes
* Request unlock schedules entire node for expiration
* Fix off by one in expiring nodes
* Undo expiring code for pre v10 nodes
* Fix RPC returning register as unlock height and not checking 0
* Rename key image unlock height const
* Undo testnet hardfork debug changes
* Remove is_type for get_type, fix missing var rename
* Move serialisable data into public namespace
* Serialise tx types properly
* Fix typo in no service node known msg
* Code review
* Fix == to >= on serialising tx type
* Code review 2
* Fix tests and key image unlock
* Add command to print locked key images
* Update ui to display lock stakes, query in print cmd blacklist
* Modify print stakes to be less slow
* Remove autostaking code
* Refactor staking into sweep functions
It appears staking was derived from stake_main and written separately at
the beginning of the implementation. This merges them back into a common
code path; after removing autostake there are only some minor differences.
It also makes sure that any upstream changes to sweeping are going to be
considered in the staking process, which is what we want.
* Display unlock height for stakes
* Begin creating output blacklist
* Make blacklist output a migration step
* Implement get_output_blacklist for lmdb
* In wallet output selection ignore blacklisted outputs
* Apply blacklisted outputs to output selection
* Fix broken tests, switch key image unlock
* Fix broken unit_tests
* Begin change to limit locked key images to 4 globally
* Revamp prepare registration for new min contribution rules
* Fix up old back case in prepare registration
* Remove debug code
* Cleanup debug code and some unnecessary changes
* Fix migration step on mainnet db
* Fix blacklist outputs for pre-existing DB's
* Remove irrelevant note
* Tweak scanning addresses for locked stakes
Since we now only allow contributions from the primary address, we can
skip checking all subaddresses + lookahead to speed up wallet scanning
* Define macro for SCNu64 for Mingw
* Fix failure on empty DB
* Add missing error msg, remove contributor from stake
* Improve staking messages
* Flush prompt to always display
* Return the msg from stake failure and fix stake parsing error
* Tweak fork rules for smaller bulletproofs
* Tweak pooled nodes minimum amounts
* Fix crash on exit, there's no need to store on destructor
Since all information about service nodes is derived from the blockchain
and we store state every time we receive a block, storing in the
destructor is redundant as there is no new information to store.
* Make prompt be consistent with CLI
* Check max number of key images from per user to node
* Implement error message on get_output_blacklist failure
* Remove resolved TODO's/comments
* Handle infinite staking in print_sn
* Atoi->strtol, fix prepare_registration, virtual override, stale msgs
2019-02-14 02:12:57 +01:00
    const uint64_t min_contribution = get_min_node_contribution(hf_version, info.staking_requirement, info.total_reserved, info.total_num_locked_contributions());
2019-01-25 04:15:52 +01:00
    if (parsed_contribution.transferred < min_contribution)
    {
      LOG_PRINT_L1("Contribution TX: Amount " << parsed_contribution.transferred <<
                   " did not meet min " << min_contribution <<
                   " for service node: " << pubkey <<
                   " on height: " << block_height <<
                   " for tx: " << cryptonote::get_transaction_hash(tx));
2019-06-18 23:57:02 +02:00
      return false;
2019-01-25 04:15:52 +01:00
    }
2018-06-29 06:47:00 +02:00
2019-08-11 05:46:59 +02:00
    //
    // Successfully Validated
    //
2019-09-06 06:33:07 +02:00
    contrib_iter = info.contributors.emplace(contributors.end());
2019-08-11 05:46:59 +02:00
    contrib_iter->address = parsed_contribution.address;
2018-08-07 05:21:56 +02:00
  }
2018-06-29 06:47:00 +02:00
2019-01-25 04:15:52 +01:00
  service_node_info::contributor_t &contributor = *contrib_iter;
2018-06-29 06:47:00 +02:00
2018-08-06 15:08:44 +02:00
  // In this action, we cannot
  // increase total_reserved so much that it is >= staking_requirement
  uint64_t can_increase_reserved_by = info.staking_requirement - info.total_reserved;
2019-01-25 04:15:52 +01:00
  uint64_t max_amount = contributor.reserved + can_increase_reserved_by;
  parsed_contribution.transferred = std::min(max_amount - contributor.amount, parsed_contribution.transferred);
2018-06-29 06:47:00 +02:00
2019-01-25 04:15:52 +01:00
  contributor.amount += parsed_contribution.transferred;
  info.total_contributed += parsed_contribution.transferred;
2018-08-06 15:08:44 +02:00
  if (contributor.amount > contributor.reserved)
  {
    info.total_reserved += contributor.amount - contributor.reserved;
    contributor.reserved = contributor.amount;
  }
2018-06-29 06:47:00 +02:00
2019-01-25 04:15:52 +01:00
  info.last_reward_block_height = block_height;
  info.last_reward_transaction_index = index;
2019-02-14 02:12:57 +01:00
  const size_t max_contributions_per_node = service_nodes::MAX_KEY_IMAGES_PER_CONTRIBUTOR * MAX_NUMBER_OF_CONTRIBUTORS;
2019-02-25 08:05:20 +01:00
  if (hf_version >= cryptonote::network_version_11_infinite_staking)
2019-01-25 04:15:52 +01:00
  {
    for (const service_node_info::contribution_t &contribution : parsed_contribution.locked_contributions)
2019-02-14 02:12:57 +01:00
    {
      if (info.total_num_locked_contributions() < max_contributions_per_node)
        contributor.locked_contributions.push_back(contribution);
      else
      {
        LOG_PRINT_L1("Contribution TX: Already hit the max number of contributions: " << max_contributions_per_node <<
2019-07-18 09:42:15 +02:00
                     " for contributor: " << cryptonote::get_account_address_as_str(nettype, false, contributor.address) <<
2019-02-14 02:12:57 +01:00
                     " on height: " << block_height <<
                     " for tx: " << cryptonote::get_transaction_hash(tx));
        break;
      }
    }
2019-01-25 04:15:52 +01:00
  }
2018-07-28 08:27:04 +02:00
2019-01-25 04:15:52 +01:00
  LOG_PRINT_L1("Contribution of " << parsed_contribution.transferred << " received for service node " << pubkey);
2019-06-18 23:57:02 +02:00
  if (info.is_fully_funded()) {
    info.active_since_height = block_height;
    return true;
  }
  return false;
2018-07-18 08:51:26 +02:00
}
2018-06-29 06:47:00 +02:00
2019-08-28 07:54:02 +02:00
bool service_node_list::block_added(const cryptonote::block &block, const std::vector<cryptonote::transaction> &txs, cryptonote::checkpoint_t const *checkpoint)
2018-07-18 08:51:26 +02:00
{
2019-08-28 07:54:02 +02:00
  if (block.major_version < cryptonote::network_version_9_service_nodes)
    return true;
2018-10-15 03:22:46 +02:00
  std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);
2019-04-11 07:08:26 +02:00
  process_block(block, txs);
2019-08-28 07:54:02 +02:00
2019-09-13 05:13:40 +02:00
  if (block.major_version >= cryptonote::network_version_13_enforce_checkpoints && checkpoint)
2019-08-28 07:54:02 +02:00
  {
    std::shared_ptr<const testing_quorum> quorum = get_testing_quorum(quorum_type::checkpointing, checkpoint->height);
    if (!quorum)
    {
      LOG_PRINT_L1("Failed to get testing quorum checkpoint for block: " << cryptonote::get_block_hash(block));
      return false;
    }
    if (!service_nodes::verify_checkpoint(block.major_version, *checkpoint, *quorum))
    {
      LOG_PRINT_L1("Service node checkpoint failed verification for block: " << cryptonote::get_block_hash(block));
      return false;
    }
  }
2018-08-16 05:08:36 +02:00
  store();
2019-08-28 07:54:02 +02:00
  return true;
2018-06-29 06:47:00 +02:00
}
2019-08-02 03:07:37 +02:00
static std::vector<size_t> generate_shuffled_service_node_index_list(
    size_t list_size,
    crypto::hash const &block_hash,
    quorum_type type,
2019-07-18 09:42:15 +02:00
    size_t sublist_size = 0,
2019-08-02 03:07:37 +02:00
    size_t sublist_up_to = 0)
{
  std::vector<size_t> result(list_size);
  std::iota(result.begin(), result.end(), 0);
  uint64_t seed = 0;
  std::memcpy(&seed, block_hash.data, std::min(sizeof(seed), sizeof(block_hash.data)));
  boost::endian::little_to_native_inplace(seed);
  seed += static_cast<uint64_t>(type);
// Shuffle 2
// |=================================|
// | |
// Shuffle 1 |
// |==============| |
// | | | |
// |sublist_size | |
// | | sublist_up_to |
// 0 N Y Z
// [.......................................]
// If we have a list [0,Z) but we need a shuffled sublist of the first N values that only
// includes values from [0,Y) then we do this using two shuffles: first of the [0,Y) sublist,
// then of the [N,Z) sublist (which is already partially shuffled, but that doesn't matter). We
// reuse the same seed for both partial shuffles, but again, that isn't an issue.
  if ((0 < sublist_size && sublist_size < list_size) && (0 < sublist_up_to && sublist_up_to < list_size)) {
    assert(sublist_size <= sublist_up_to); // Can't select N random items from M items when M < N
    loki_shuffle(result.begin(), result.begin() + sublist_up_to, seed);
    loki_shuffle(result.begin() + sublist_size, result.end(), seed);
  }
  else {
    loki_shuffle(result.begin(), result.end(), seed);
  }
  return result;
}
2019-09-06 06:29:23 +02:00
static quorum_manager generate_quorums(cryptonote::network_type nettype, uint8_t hf_version, service_node_list::state_t const &state)
2019-08-02 03:07:37 +02:00
{
  quorum_manager result = {};
2019-09-06 06:29:23 +02:00
  assert(state.block_hash != crypto::null_hash);
2019-08-02 03:07:37 +02:00
  // The two quorums here have different selection criteria: the entire checkpoint quorum and the
  // state change *validators* want only active service nodes, but the state change *workers*
  // (i.e. the nodes to be tested) also include decommissioned service nodes. (Prior to v12 there
  // are no decommissioned nodes, so this distinction is irrelevant for network consensus).
  auto active_snode_list = state.active_service_nodes_infos();
  decltype(active_snode_list) decomm_snode_list;
  if (hf_version >= cryptonote::network_version_12_checkpointing)
    decomm_snode_list = state.decommissioned_service_nodes_infos();
  quorum_type const max_quorum_type = max_quorum_type_for_hf(hf_version);
  for (int type_int = 0; type_int <= (int)max_quorum_type; type_int++)
  {
    auto type = static_cast<quorum_type>(type_int);
    size_t num_validators = 0, num_workers = 0;
    auto quorum = std::make_shared<testing_quorum>();
    std::vector<size_t> pub_keys_indexes;
    if (type == quorum_type::obligations)
    {
      size_t total_nodes = active_snode_list.size() + decomm_snode_list.size();
      num_validators = std::min(active_snode_list.size(), STATE_CHANGE_QUORUM_SIZE);
2019-09-06 06:29:23 +02:00
      pub_keys_indexes = generate_shuffled_service_node_index_list(total_nodes, state.block_hash, type, num_validators, active_snode_list.size());
2019-08-02 03:07:37 +02:00
      result.obligations = quorum;
      size_t num_remaining_nodes = total_nodes - num_validators;
      num_workers = std::min(num_remaining_nodes, std::max(STATE_CHANGE_MIN_NODES_TO_TEST, num_remaining_nodes / STATE_CHANGE_NTH_OF_THE_NETWORK_TO_TEST));
    }
    else if (type == quorum_type::checkpointing)
    {
      // Checkpoint quorums only exist every CHECKPOINT_INTERVAL blocks, but the height that gets
      // used to generate the quorum (i.e. the `height` variable here) is actually `H -
      // REORG_SAFETY_BUFFER_BLOCKS_POST_HF12`, where H is divisible by CHECKPOINT_INTERVAL, but
      // REORG_SAFETY_BUFFER_BLOCKS_POST_HF12 is not (it equals 11). Hence the addition here to
      // "undo" the lag before checking to see if we're on an interval multiple:
2019-09-06 06:29:23 +02:00
      if ((state.height + REORG_SAFETY_BUFFER_BLOCKS_POST_HF12) % CHECKPOINT_INTERVAL != 0)
2019-08-02 03:07:37 +02:00
        continue; // Not on an interval multiple: no checkpointing quorum is defined.
      size_t total_nodes = active_snode_list.size();
      // TODO(loki): Soft fork, remove when testnet gets reset
2019-09-06 06:29:23 +02:00
      if (nettype == cryptonote::TESTNET && state.height < 85357)
2019-08-02 03:07:37 +02:00
        total_nodes = active_snode_list.size() + decomm_snode_list.size();
2019-09-20 04:46:34 +02:00
      // TODO(loki): We can remove after switching to V13 since we delete all V12 and below checkpoints where we introduced this kind of quorum
      if (hf_version >= cryptonote::network_version_13_enforce_checkpoints && total_nodes < CHECKPOINT_QUORUM_SIZE)
        continue;
2019-09-06 06:29:23 +02:00
      pub_keys_indexes = generate_shuffled_service_node_index_list(total_nodes, state.block_hash, type);
2019-08-02 03:07:37 +02:00
      result.checkpointing = quorum;
2019-09-20 07:54:00 +02:00
      if ((state.height + REORG_SAFETY_BUFFER_BLOCKS_POST_HF12) >= hf13_height)
        num_validators = std::min(pub_keys_indexes.size(), CHECKPOINT_QUORUM_SIZE);
      else
        num_workers = std::min(pub_keys_indexes.size(), CHECKPOINT_QUORUM_SIZE);
2019-08-02 03:07:37 +02:00
    }
    else
    {
      MERROR("Unhandled quorum type enum with value: " << type_int);
      continue;
    }
    quorum->validators.reserve(num_validators);
    quorum->workers.reserve(num_workers);
    size_t i = 0;
    for (; i < num_validators; i++)
    {
      quorum->validators.push_back(active_snode_list[pub_keys_indexes[i]].first);
    }
    for (; i < num_validators + num_workers; i++)
    {
      size_t j = pub_keys_indexes[i];
      if (j < active_snode_list.size())
        quorum->workers.push_back(active_snode_list[j].first);
      else
        quorum->workers.push_back(decomm_snode_list[j - active_snode_list.size()].first);
    }
  }
  return result;
}
2019-09-06 06:33:07 +02:00
void service_node_list::state_t::update_from_block(cryptonote::BlockchainDB const &db,
                                                   cryptonote::network_type nettype,
2019-07-18 09:42:15 +02:00
                                                   std::set<state_t> const &state_history,
                                                   std::unordered_map<crypto::hash, state_t> const &alt_states,
                                                   const cryptonote::block &block,
                                                   const std::vector<cryptonote::transaction> &txs,
                                                   crypto::public_key const *my_pubkey)
2018-06-29 06:47:00 +02:00
{
2019-09-02 05:20:05 +02:00
  ++height;
2019-06-18 23:57:02 +02:00
  bool need_swarm_update = false;
2019-07-18 09:42:15 +02:00
  uint64_t block_height = cryptonote::get_block_height(block);
  assert(height == block_height);
2019-09-06 06:29:23 +02:00
  quorums = {};
  block_hash = cryptonote::get_block_hash(block);
  uint8_t const hf_version = block.major_version;
2018-06-29 06:47:00 +02:00
2019-01-25 04:15:52 +01:00
  //
  // Remove expired blacklisted key images
  //
2019-09-09 03:14:19 +02:00
  if (hf_version >= cryptonote::network_version_11_infinite_staking)
2019-01-25 04:15:52 +01:00
  {
2019-09-06 06:29:23 +02:00
    for (auto entry = key_image_blacklist.begin(); entry != key_image_blacklist.end();)
    {
      if (block_height >= entry->unlock_height)
        entry = key_image_blacklist.erase(entry);
      else
        entry++;
    }
2019-01-25 04:15:52 +01:00
  }
2018-11-16 00:32:56 +01:00
2019-01-25 04:15:52 +01:00
  //
  // Expire Nodes
  //
2019-09-06 06:33:07 +02:00
  for (const crypto::public_key &pubkey : get_expired_nodes(db, nettype, block.major_version, block_height))
2018-08-05 05:08:51 +02:00
  {
2019-07-18 09:42:15 +02:00
    auto i = service_nodes_infos.find(pubkey);
    if (i != service_nodes_infos.end())
2018-08-05 05:08:51 +02:00
    {
2019-07-18 09:42:15 +02:00
      if (my_pubkey && *my_pubkey == pubkey)
2018-10-16 01:32:35 +02:00
      {
        MGINFO_GREEN("Service node expired (yours): " << pubkey << " at block height: " << block_height);
      }
      else
      {
        LOG_PRINT_L1("Service node expired: " << pubkey << " at block height: " << block_height);
      }
Use shared_ptr storage for service_node_info
This converts the stored service_node_info value into a
`shared_ptr<const service_node_info>` rather than a plain
`service_node_info`. This yields a huge performance benefit by
eliminating the vast majority of service_node_info
construction, destruction, and copying.
Most of the time when we copy a service_node_info nothing in it has
changed, which means we're storing exactly the same thing; this means an
extra construction for every SN info on every block *and* an extra
destruction when we cull old stored history. By using a
shared_ptr, the vast majority of those constructions and destructions
are eliminated.
The immediately previous commit (upon which this one builds) already
reduced a full rescan from 180s to 171s; this commit further reduces
that time to 104s, or about 42% reduced from the rescan time required
before this pair of commits. (All timings are from the dev.lokinet.org
box, tested over multiple runs with the entire lmdb cached in memory).
With the shared_ptr approach, we only make a copy when a change is
actually needed: because of infrequent (at the per-SN level) events like
a state_change, received reward, contribution, etc. The contained
reference is deliberately `const` so that values are not changeable;
there's a new function that does an explicit copying duplication,
returning the new non-const and storing the const ref in the shared
pointer.
Related to this is a small change (and fix) to how proof info and
public_ip/storage_port are stored: rather than store the values in the
service_node_info struct itself, they now get stored in a shared_ptr
inside the service_node_info that intentionally gets shared among all
copies of the service_node_info (that is, a SN info copy deliberately
copies the pointer rather than the values). This also moves the ip/port
values into the proof struct, since that seemed much easier than
maintaining a separate shared_ptr for each value.
Previously, because these were stored as values in the service_node_info
they would actually get rolled back in the event of a reorg, but that
seems highly undesirable: you would end up rolling back to the old
values of the uptime proof and ip address (for example), but that should
not happen: those values are not dependent on the blockchain and so
should not be affected by a reorg/rollback. With this change they
aren't since there is only one actual proof stored.
Note that the shared storage here only applies to in-memory states;
states loaded from the db will still be duplicated.
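A minimal sketch of the copy-on-write pattern described above, using hypothetical stand-in names (example_info, duplicate_info_example) rather than the project's actual types:

#include <cstdint>
#include <memory>

struct example_info            // hypothetical stand-in for service_node_info
{
  uint64_t last_reward_block_height = 0;
  uint64_t swarm_id = 0;
};

using example_info_ptr = std::shared_ptr<const example_info>;

// Replace the stored const pointer with a fresh mutable copy and return it;
// any other state still holding the old pointer keeps sharing the old value.
example_info &duplicate_info_example(example_info_ptr &ptr)
{
  auto copy = std::make_shared<example_info>(*ptr);
  ptr = copy;       // this state now points at the new copy
  return *copy;     // caller mutates only the copy
}

// Usage: most blocks change nothing, so states simply share pointers; only an
// actual change (reward, state_change, contribution, ...) pays for a copy:
//   duplicate_info_example(infos.at(pubkey)).last_reward_block_height = height;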
2019-08-11 18:19:24 +02:00
      need_swarm_update += i->second->is_active();
2019-07-18 09:42:15 +02:00
      service_nodes_infos.erase(i);
2018-08-05 05:08:51 +02:00
    }
  }
2019-01-25 04:15:52 +01:00
  //
  // Advance the list to the next candidate for a reward
  //
2018-06-29 06:47:00 +02:00
  {
2019-01-25 04:15:52 +01:00
    crypto::public_key winner_pubkey = cryptonote::get_service_node_winner_from_tx_extra(block.miner_tx.extra);
2019-07-18 09:42:15 +02:00
    auto it = service_nodes_infos.find(winner_pubkey);
    if (it != service_nodes_infos.end())
2019-01-25 04:15:52 +01:00
    {
      // set the winner as though it was re-registering at transaction index=UINT32_MAX for this block
2019-08-11 18:19:24 +02:00
      auto &info = duplicate_info(it->second);
      info.last_reward_block_height = block_height;
      info.last_reward_transaction_index = UINT32_MAX;
2019-01-25 04:15:52 +01:00
    }
2018-06-29 06:47:00 +02:00
  }
2019-01-25 04:15:52 +01:00
  //
  // Process TXs in the Block
  //
  for (uint32_t index = 0; index < txs.size(); ++index)
2018-06-29 06:47:00 +02:00
  {
2019-06-11 20:53:46 +02:00
    const cryptonote::transaction &tx = txs[index];
    if (tx.type == cryptonote::txtype::standard)
2019-01-25 04:15:52 +01:00
    {
2019-07-18 09:42:15 +02:00
      process_registration_tx(nettype, block, tx, index, my_pubkey);
      need_swarm_update += process_contribution_tx(nettype, block, tx, index);
2018-11-16 00:32:56 +01:00
    }
2019-06-18 23:57:02 +02:00
    else if (tx.type == cryptonote::txtype::state_change)
2019-01-25 04:15:52 +01:00
    {
2019-07-18 09:42:15 +02:00
      need_swarm_update += process_state_change_tx(state_history, alt_states, nettype, block, tx, my_pubkey);
2019-01-25 04:15:52 +01:00
    }
2019-06-11 20:53:46 +02:00
    else if (tx.type == cryptonote::txtype::key_image_unlock)
2019-01-25 04:15:52 +01:00
    {
2019-07-18 09:42:15 +02:00
      process_key_image_unlock_tx(nettype, block_height, tx);
    }
  }
2019-01-25 04:15:52 +01:00
2019-07-18 09:42:15 +02:00
  if (need_swarm_update)
  {
    crypto::hash const block_hash = cryptonote::get_block_hash(block);
    uint64_t seed = 0;
    std::memcpy(&seed, block_hash.data, sizeof(seed));
2019-01-25 04:15:52 +01:00
2019-07-18 09:42:15 +02:00
    /// Gather existing swarms from infos
    swarm_snode_map_t existing_swarms;
    for (const auto &key_info : active_service_nodes_infos())
      existing_swarms[key_info.second->swarm_id].push_back(key_info.first);
2019-01-25 04:15:52 +01:00
2019-07-18 09:42:15 +02:00
    calc_swarm_changes(existing_swarms, seed);
    /// Apply changes
    for (const auto entry : existing_swarms) {
      const swarm_id_t swarm_id = entry.first;
      const std::vector<crypto::public_key> &snodes = entry.second;
      for (const auto snode : snodes) {
        auto &sn_info_ptr = service_nodes_infos.at(snode);
        if (sn_info_ptr->swarm_id == swarm_id) continue; /// nothing changed for this snode
        duplicate_info(sn_info_ptr).swarm_id = swarm_id;
2019-01-25 04:15:52 +01:00
      }
2019-07-18 09:42:15 +02:00
    }
  }
2019-09-02 05:20:05 +02:00
2019-09-06 06:29:23 +02:00
  quorums = generate_quorums(nettype, hf_version, *this);
2019-07-18 09:42:15 +02:00
}
2019-01-25 04:15:52 +01:00
2019-07-18 09:42:15 +02:00
void service_node_list::process_block(const cryptonote::block &block, const std::vector<cryptonote::transaction> &txs)
{
  uint64_t block_height = cryptonote::get_block_height(block);
2019-08-28 07:54:02 +02:00
  uint8_t hf_version = m_blockchain.get_hard_fork_version(block_height);
2019-01-25 04:15:52 +01:00
2019-08-28 07:54:02 +02:00
  if (hf_version < 9)
2019-07-18 09:42:15 +02:00
    return;
  //
  // Cull old history
  //
  {
2019-08-28 07:54:02 +02:00
    uint64_t cull_height = short_term_state_cull_height(hf_version, m_db, block_height);
2019-07-18 09:42:15 +02:00
    auto end_it = m_state_history.upper_bound(cull_height);
    for (auto it = m_state_history.begin(); it != end_it; it++)
    {
      if (m_store_quorum_history)
        m_old_quorum_states.emplace_back(it->height, it->quorums);
      uint64_t next_long_term_state = ((it->height / STORE_LONG_TERM_STATE_INTERVAL) + 1) * STORE_LONG_TERM_STATE_INTERVAL;
      uint64_t dist_to_next_long_term_state = next_long_term_state - it->height;
      bool need_quorum_for_future_states = (dist_to_next_long_term_state <= VOTE_LIFETIME + VOTE_OR_TX_VERIFY_HEIGHT_BUFFER);
      if ((it->height % STORE_LONG_TERM_STATE_INTERVAL) == 0 || need_quorum_for_future_states)
2019-01-25 04:15:52 +01:00
      {
2019-07-18 09:42:15 +02:00
        m_state_added_to_archive = true;
        if (need_quorum_for_future_states) // Preserve just quorum
2019-01-25 04:15:52 +01:00
        {
2019-07-18 09:42:15 +02:00
          state_t &state = const_cast<state_t &>(*it); // safe: set order only depends on state_t.height
          state.service_nodes_infos = {};
          state.key_image_blacklist = {};
          state.only_loaded_quorums = true;
2019-01-25 04:15:52 +01:00
        }
2019-07-18 09:42:15 +02:00
        m_state_archive.emplace_hint(m_state_archive.end(), std::move(*it));
2019-01-25 04:15:52 +01:00
      }
2018-11-16 00:32:56 +01:00
    }
2019-07-18 09:42:15 +02:00
    m_state_history.erase(m_state_history.begin(), end_it);
    if (m_old_quorum_states.size() > m_store_quorum_history)
      m_old_quorum_states.erase(m_old_quorum_states.begin(), m_old_quorum_states.begin() + (m_old_quorum_states.size() - m_store_quorum_history));
2018-06-29 06:47:00 +02:00
  }
2018-07-18 04:42:47 +02:00
2019-07-18 09:42:15 +02:00
  //
  // Cull alt state history
  //
  if (hf_version >= cryptonote::network_version_12_checkpointing)
  {
    cryptonote::checkpoint_t immutable_checkpoint;
    if (m_db->get_immutable_checkpoint(&immutable_checkpoint, block_height))
    {
      for (auto it = m_alt_state.begin(); it != m_alt_state.end(); )
      {
        state_t const &alt_state = it->second;
        if (alt_state.height < immutable_checkpoint.height) it = m_alt_state.erase(it);
        else it++;
      }
    }
  }

  cryptonote::network_type nettype = m_blockchain.nettype();
  m_state_history.insert(m_state_history.end(), m_state);
  m_state.update_from_block(*m_db, nettype, m_state_history, m_alt_state, block, txs, m_service_node_pubkey);
}
void service_node_list::blockchain_detached(uint64_t height)
{
  std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);

  uint64_t revert_to_height = height - 1;
  bool reinitialise         = false;
  bool using_archive        = false;
  {
    auto it      = m_state_history.find(revert_to_height); // Try finding detached height directly
    reinitialise = (it == m_state_history.end() || it->only_loaded_quorums);
    if (!reinitialise)
      m_state_history.erase(std::next(it), m_state_history.end());
  }

  // TODO(loki): We should loop through the prev 10k heights for robustness, but avoid for v4.0.5. Already enough changes going in
  if (reinitialise) // Try finding the next closest old state at 10k intervals
  {
    uint64_t prev_interval = revert_to_height - (revert_to_height % STORE_LONG_TERM_STATE_INTERVAL);
    auto it                = m_state_archive.find(prev_interval);
    reinitialise           = (it == m_state_archive.end() || it->only_loaded_quorums);
    if (!reinitialise)
    {
      m_state_history.clear();
      m_state_archive.erase(std::next(it), m_state_archive.end());
      using_archive = true;
    }
  }

  if (reinitialise)
  {
    m_state_history.clear();
    m_state_archive.clear();
    init();
    return;
  }
  std::set<state_t> &history = (using_archive) ? m_state_archive : m_state_history;
  auto it                    = std::prev(history.end());
  m_state                    = std::move(*it);
  history.erase(it);

  if (m_state.height != revert_to_height)
    rescan_starting_from_curr_state(false /*store_to_disk*/);
  store();
}
std::vector<crypto::public_key> service_node_list::state_t::get_expired_nodes(cryptonote::BlockchainDB const &db,
                                                                              cryptonote::network_type nettype,
                                                                              uint8_t hf_version,
                                                                              uint64_t block_height) const
{
  std::vector<crypto::public_key> expired_nodes;
  uint64_t const lock_blocks = staking_num_lock_blocks(nettype);

  // TODO(loki): This should really use the registration height instead of getting the block and expiring nodes.
  // But there's something subtly off when using registration height causing syncing problems.
  if (hf_version == cryptonote::network_version_9_service_nodes)
  {
    if (block_height <= lock_blocks)
      return expired_nodes;

    const uint64_t expired_nodes_block_height = block_height - lock_blocks;
    cryptonote::block block = {};
    try
    {
      block = db.get_block_from_height(expired_nodes_block_height);
    }
    catch (std::exception const &e)
    {
      LOG_ERROR("Failed to get historical block to find expired nodes in v9: " << e.what());
      return expired_nodes;
    }

    if (block.major_version < cryptonote::network_version_9_service_nodes)
      return expired_nodes;

    for (crypto::hash const &hash : block.tx_hashes)
    {
      cryptonote::transaction tx;
      if (!db.get_tx(hash, tx))
      {
        LOG_ERROR("Failed to get historical tx to find expired service nodes in v9");
        continue;
      }

      uint32_t index = 0;
      crypto::public_key key;
      service_node_info info = {};
      if (is_registration_tx(nettype, cryptonote::network_version_9_service_nodes, tx, block.timestamp, expired_nodes_block_height, index, key, info))
        expired_nodes.push_back(key);
      index++;
    }
  }
  else
  {
    for (auto it = service_nodes_infos.begin(); it != service_nodes_infos.end(); it++)
    {
      crypto::public_key const &snode_key = it->first;
Use shared_ptr storage for service_node_info

This converts the stored service_node_info value into a
`shared_ptr<const service_node_info>` rather than a plain
`service_node_info`. This yields a huge performance benefit by
eliminating the vast majority of service_node_info construction,
destruction, and copying.

Most of the time when we copy a service_node_info nothing in it has
changed, which means we're storing exactly the same thing; this means an
extra construction for every SN info on every block *and* an extra
destruction when we cull old stored history. By using a shared_ptr, the
vast majority of those constructions and destructions are eliminated.

The immediately previous commit (upon which this one builds) already
reduced a full rescan from 180s to 171s; this commit further reduces
that time to 104s, or about 42% reduced from the rescan time required
before this pair of commits. (All timings are from the dev.lokinet.org
box, tested over multiple runs with the entire lmdb cached in memory.)

With the shared_ptr approach, we only make a copy when a change is
actually needed: because of infrequent (at the per-SN level) events like
a state_change, received reward, contribution, etc. The contained
reference is deliberately `const` so that values are not changeable;
there's a new function that does an explicit copying duplication,
returning the new non-const reference and storing the const ref in the
shared pointer.

Related to this is a small change (and fix) to how proof info and
public_ip/storage_port are stored: rather than storing the values in the
service_node_info struct itself, they now get stored in a shared_ptr
inside the service_node_info that intentionally gets shared among all
copies of the service_node_info (that is, a SN info copy deliberately
copies the pointer rather than the values). This also moves the ip/port
values into the proof struct, since that seemed much easier than
maintaining a separate shared_ptr for each value.

Previously, because these were stored as values in the service_node_info
they would actually get rolled back in the event of a reorg, but that
seems highly undesirable: you would end up rolling back to the old
values of the uptime proof and ip address (for example), but that should
not happen: those values are not dependent on the blockchain and so
should not be affected by a reorg/rollback. With this change they
aren't, since there is only one actual proof stored.

Note that the shared storage here only applies to in-memory states;
states loaded from the db will still be duplicated.
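      // A minimal sketch (not part of the original change) of the copy-on-write pattern the
      // shared_ptr storage relies on: reads dereference the stored
      // shared_ptr<const service_node_info> directly, while a writer first asks for an explicit
      // copy, as is done for swarm updates elsewhere in this file (new_swarm_id is a placeholder):
      //
      //   duplicate_info(it->second).swarm_id = new_swarm_id; // deep-copy this entry, mutate only the copy
      //
      // Entries that are never modified keep sharing the same underlying service_node_info object.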
      const service_node_info &info = *it->second;
      if (info.registration_hf_version >= cryptonote::network_version_11_infinite_staking)
      {
        if (info.requested_unlock_height != KEY_IMAGE_AWAITING_UNLOCK_HEIGHT && block_height > info.requested_unlock_height)
          expired_nodes.push_back(snode_key);
      }
      else // Version 10 Bulletproofs
      {
        /// Note: this code exhibits a subtle unintended behaviour: a snode that
        /// registered in hardfork 9 and was scheduled for deregistration in hardfork 10
        /// will have its life slightly prolonged by the "grace period", although it might
        /// look like we use the registration height to determine the expiry height.
        uint64_t node_expiry_height = info.registration_height + lock_blocks + STAKING_REQUIREMENT_LOCK_BLOCKS_EXCESS;
        if (block_height > node_expiry_height)
          expired_nodes.push_back(snode_key);
      }
    }
  }

  return expired_nodes;
}
block_winner service_node_list::state_t::get_block_winner() const
{
  block_winner result           = {};
  service_node_info const *info = nullptr;
  {
    auto oldest_waiting = std::make_tuple(std::numeric_limits<uint64_t>::max(), std::numeric_limits<uint32_t>::max(), crypto::null_pkey);
    for (const auto &info_it : service_nodes_infos)
    {
      const auto &sninfo = *info_it.second;
      if (sninfo.is_active())
      {
        auto waiting_since = std::make_tuple(sninfo.last_reward_block_height, sninfo.last_reward_transaction_index, info_it.first);
        if (waiting_since < oldest_waiting)
        {
          oldest_waiting = waiting_since;
          info           = &sninfo;
        }
      }
    }
    result.key = std::get<2>(oldest_waiting);
  }

  if (result.key == crypto::null_pkey)
  {
    result = service_nodes::null_block_winner;
    return result;
  }

  // Add contributors and their portions to winners.
  result.payouts.reserve(info->contributors.size());
  const uint64_t remaining_portions = STAKING_PORTIONS - info->portions_for_operator;
  for (const auto &contributor : info->contributors)
  {
    uint64_t hi, lo, resulthi, resultlo;
    lo = mul128(contributor.amount, remaining_portions, &hi);
    div128_64(hi, lo, info->staking_requirement, &resulthi, &resultlo);

    if (contributor.address == info->operator_address)
      resultlo += info->portions_for_operator;

    result.payouts.push_back({contributor.address, resultlo});
  }

  return result;
}
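// Reading the payout loop above as a formula (a restatement of the code, not new behaviour):
// for a contributor who staked amount_i into a node whose total stake is staking_requirement,
//
//   portions_i = amount_i * (STAKING_PORTIONS - portions_for_operator) / staking_requirement
//
// computed with 128-bit intermediates via mul128/div128_64, and the operator's own entry is
// additionally credited portions_for_operator. For a fully staked node the contributors'
// portions therefore sum to (approximately, after integer truncation) STAKING_PORTIONS.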
bool service_node_list::validate_miner_tx(const crypto::hash &prev_id, const cryptonote::transaction &miner_tx, uint64_t height, int hf_version, cryptonote::block_reward_parts const &reward_parts) const
{
  std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);
  if (hf_version < 9)
    return true;

  // NOTE(loki): Service node reward distribution is calculated from the
  // original amount, i.e. 50% of the original base reward goes to service
  // nodes, not 50% of the reward after removing the governance component (the
  // adjusted base reward post hardfork 10).
  uint64_t base_reward               = reward_parts.original_base_reward;
  uint64_t total_service_node_reward = cryptonote::service_node_reward_formula(base_reward, hf_version);

  block_winner winner                    = m_state.get_block_winner();
  crypto::public_key check_winner_pubkey = cryptonote::get_service_node_winner_from_tx_extra(miner_tx.extra);
  if (winner.key != check_winner_pubkey)
  {
    MERROR("Service node reward winner is incorrect! Expected " << winner.key << ", block has " << check_winner_pubkey);
    return false;
  }

  if ((miner_tx.vout.size() - 1) < winner.payouts.size())
  {
    MERROR("Service node reward specifies more winners than available outputs: " << (miner_tx.vout.size() - 1) << ", winners: " << winner.payouts.size());
    return false;
  }

  for (size_t i = 0; i < winner.payouts.size(); i++)
  {
    size_t vout_index          = i + 1;
    payout_entry const &payout = winner.payouts[i];
    uint64_t reward            = cryptonote::get_portion_of_reward(payout.portions, total_service_node_reward);

    if (miner_tx.vout[vout_index].amount != reward)
    {
      MERROR("Service node reward amount incorrect. Should be " << cryptonote::print_money(reward) << ", is: " << cryptonote::print_money(miner_tx.vout[vout_index].amount));
      return false;
    }

    if (miner_tx.vout[vout_index].target.type() != typeid(cryptonote::txout_to_key))
    {
      MERROR("Service node output target type should be txout_to_key");
      return false;
    }

    crypto::key_derivation derivation     = AUTO_VAL_INIT(derivation);
    crypto::public_key out_eph_public_key = AUTO_VAL_INIT(out_eph_public_key);
    cryptonote::keypair gov_key           = cryptonote::get_deterministic_keypair_from_height(height);

    bool r = crypto::generate_key_derivation(payout.address.m_view_public_key, gov_key.sec, derivation);
    CHECK_AND_ASSERT_MES(r, false, "while creating outs: failed to generate_key_derivation(" << payout.address.m_view_public_key << ", " << gov_key.sec << ")");
    r = crypto::derive_public_key(derivation, vout_index, payout.address.m_spend_public_key, out_eph_public_key);
    CHECK_AND_ASSERT_MES(r, false, "while creating outs: failed to derive_public_key(" << derivation << ", " << vout_index << ", " << payout.address.m_spend_public_key << ")");

    if (boost::get<cryptonote::txout_to_key>(miner_tx.vout[vout_index].target).key != out_eph_public_key)
    {
      MERROR("Invalid service node reward output");
      return false;
    }
  }

  return true;
}
bool service_node_list::alt_block_added(cryptonote::block const &block, std::vector<cryptonote::transaction> const &txs, cryptonote::checkpoint_t const *checkpoint)
{
  if (block.major_version < cryptonote::network_version_9_service_nodes)
    return true;

  uint64_t block_height         = cryptonote::get_block_height(block);
  state_t const *starting_state = nullptr;
  crypto::hash const block_hash = get_block_hash(block);

  auto it = m_alt_state.find(block_hash);
  if (it != m_alt_state.end()) return true; // NOTE: Already processed alt-state for this block

  // NOTE: Check if alt block forks off some historical state on the canonical chain
  if (!starting_state)
  {
    auto it = m_state_history.find(block_height - 1);
    if (it != m_state_history.end())
      if (block.prev_id == it->block_hash) starting_state = &(*it);
  }

  // NOTE: Check if alt block forks off some historical alt state on an alt chain
  if (!starting_state)
  {
    auto it = m_alt_state.find(block.prev_id);
    if (it != m_alt_state.end()) starting_state = &it->second;
  }

  if (!starting_state)
  {
    LOG_PRINT_L1("Received alt block but couldn't find parent state in historical state");
    return false;
  }

  if (starting_state->block_hash != block.prev_id)
  {
    LOG_PRINT_L1("Unexpected state_t's hash: " << starting_state->block_hash
                                               << ", does not match the block prev hash: " << block.prev_id);
    return false;
  }

  state_t alt_state = *starting_state;
  alt_state.update_from_block(*m_db, m_blockchain.nettype(), m_state_history, m_alt_state, block, txs, m_service_node_pubkey);
  m_alt_state[block_hash] = std::move(alt_state);

  if (checkpoint)
  {
    std::vector<std::shared_ptr<const service_nodes::testing_quorum>> alt_quorums;
    std::shared_ptr<const testing_quorum> quorum = get_testing_quorum(quorum_type::checkpointing, checkpoint->height, false, &alt_quorums);
    if (!quorum)
      return false;

    if (!service_nodes::verify_checkpoint(block.major_version, *checkpoint, *quorum))
    {
      bool verified_on_alt_quorum = false;
      for (std::shared_ptr<const service_nodes::testing_quorum> alt_quorum : alt_quorums)
      {
        if (service_nodes::verify_checkpoint(block.major_version, *checkpoint, *alt_quorum))
        {
          verified_on_alt_quorum = true;
          break;
        }
      }

      if (!verified_on_alt_quorum)
        return false;
    }
  }

  return true;
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
static service_node_list::quorum_for_serialization serialize_quorum_state(uint8_t hf_version, uint64_t height, quorum_manager const &quorums)
{
  service_node_list::quorum_for_serialization result = {};
  result.height = height;
  if (quorums.obligations)   result.quorums[static_cast<uint8_t>(quorum_type::obligations)]   = *quorums.obligations;
  if (quorums.checkpointing) result.quorums[static_cast<uint8_t>(quorum_type::checkpointing)] = *quorums.checkpointing;
  return result;
}

static service_node_list::state_serialized serialize_service_node_state_object(uint8_t hf_version, service_node_list::state_t const &state, bool only_serialize_quorums = false)
{
  service_node_list::state_serialized result = {};
  result.version             = service_node_list::state_serialized::get_version(hf_version);
  result.height              = state.height;
  result.quorums             = serialize_quorum_state(hf_version, state.height, state.quorums);
  result.only_stored_quorums = state.only_loaded_quorums || only_serialize_quorums;
  if (only_serialize_quorums)
    return result;

  result.infos.reserve(state.service_nodes_infos.size());
  for (const auto &kv_pair : state.service_nodes_infos)
    result.infos.emplace_back(kv_pair);

  result.key_image_blacklist = state.key_image_blacklist;
  result.block_hash          = state.block_hash;
  return result;
}
bool service_node_list::store()
{
  if (!m_db)
    return false; // Haven't been initialized yet

  uint8_t hf_version = m_blockchain.get_current_hard_fork_version();
  if (hf_version < cryptonote::network_version_9_service_nodes)
    return true;

  data_for_serialization *data[] = {&m_cache_long_term_data, &m_cache_short_term_data};
  auto const serialize_version   = data_for_serialization::get_version(hf_version);
  std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);

  for (data_for_serialization *serialize_entry : data)
  {
    if (serialize_entry->version != serialize_version) m_state_added_to_archive = true;
    serialize_entry->version = serialize_version;
    serialize_entry->clear();
  }

  m_cache_short_term_data.quorum_states.reserve(m_old_quorum_states.size());
  for (const quorums_by_height &entry : m_old_quorum_states)
    m_cache_short_term_data.quorum_states.push_back(serialize_quorum_state(hf_version, entry.height, entry.quorums));

  if (m_state_added_to_archive)
  {
    for (auto const &it : m_state_archive)
      m_cache_long_term_data.states.push_back(serialize_service_node_state_object(hf_version, it));
  }

  // NOTE: A state_t may reference quorums up to (VOTE_LIFETIME
  // + VOTE_OR_TX_VERIFY_HEIGHT_BUFFER) blocks back. So in the
  // (MAX_SHORT_TERM_STATE_HISTORY | 2nd oldest checkpoint) window of states we store, for the
  // first (VOTE_LIFETIME + VOTE_OR_TX_VERIFY_HEIGHT_BUFFER) states we only
  // store their quorums, such that the following states have quorum
  // information preceding them.
  uint64_t const max_short_term_height = short_term_state_cull_height(hf_version, m_db, (m_state.height - 1)) + VOTE_LIFETIME + VOTE_OR_TX_VERIFY_HEIGHT_BUFFER;
  for (auto it = m_state_history.begin();
       it != m_state_history.end() && it->height <= max_short_term_height;
       it++)
  {
    // TODO(loki): There are 2 places where we convert a state_t to be a serialized state_t without quorums. We should only do this in one location for clarity.
    m_cache_short_term_data.states.push_back(serialize_service_node_state_object(hf_version, *it, it->height < max_short_term_height /*only_serialize_quorums*/));
  }

  m_cache_data_blob.clear();
  if (m_state_added_to_archive)
  {
    std::stringstream ss;
    binary_archive<true> ba(ss);
    bool r = ::serialization::serialize(ba, m_cache_long_term_data);
    CHECK_AND_ASSERT_MES(r, false, "Failed to store service node info: failed to serialize long term data");
    m_cache_data_blob.append(ss.str());
    {
      cryptonote::db_wtxn_guard txn_guard(m_db);
      m_db->set_service_node_data(m_cache_data_blob, true /*long_term*/);
    }
  }

  m_cache_data_blob.clear();
  {
    std::stringstream ss;
    binary_archive<true> ba(ss);
    bool r = ::serialization::serialize(ba, m_cache_short_term_data);
    CHECK_AND_ASSERT_MES(r, false, "Failed to store service node info: failed to serialize short term data");
    m_cache_data_blob.append(ss.str());
    {
      cryptonote::db_wtxn_guard txn_guard(m_db);
      m_db->set_service_node_data(m_cache_data_blob, false /*long_term*/);
    }
  }

  m_state_added_to_archive = false;
  return true;
}
Relax deregistration rules

This replaces the deregistration mechanism with a new state change
mechanism (beginning at the v12 fork) which can change a service node's
network status via three potential values (and is extensible in the
future to handle more):

- deregistered -- this is the same as the existing deregistration; the
  SN is instantly removed from the SN list.
- decommissioned -- this is a sort of temporary deregistration: your SN
  remains in the service node list, but is removed from the rewards list
  and from any network duties.
- recommissioned -- this tx is sent by a quorum if they observe a
  decommissioned SN sending uptime proofs again. Upon reception, the SN
  is reactivated and put on the end of the reward list.

Since this is broadening the quorum use, this also renames the relevant
quorum to an "obligations" quorum (since it validates SN obligations),
while the transactions are "state_change" transactions (since they
change the state of a registered SN).

The new parameters added to service_node_rules.h control how this works:

// Service node decommissioning: as service nodes stay up they earn "credits" (measured in blocks)
// towards a future outage. A new service node starts out with INITIAL_CREDIT, and then builds up
// CREDIT_PER_DAY for each day the service node remains active up to a maximum of
// DECOMMISSION_MAX_CREDIT.
//
// If a service node stops sending uptime proofs, a quorum will consider whether the service node
// has built up enough credits (at least MINIMUM): if so, instead of submitting a deregistration,
// it instead submits a decommission. This removes the service node from the list of active
// service nodes both for rewards and for any active network duties. If the service node comes
// back online (i.e. starts sending the required performance proofs again) before the credits run
// out then a quorum will reinstate the service node using a recommission transaction, which adds
// the service node back to the bottom of the service node reward list, and resets its accumulated
// credits to 0. If it does not come back online within the required number of blocks (i.e. the
// accumulated credit at the point of decommissioning) then a quorum will send a permanent
// deregistration transaction to the network, starting a 30-day deregistration count down.

This commit currently includes values (which are not necessarily
finalized):

- 8 hours (240 blocks) of credit required for activation of a
  decommission (rather than a deregister)
- 0 initial credits at registration
- a maximum of 24 hours (720 blocks) of credits
- credits accumulate at a rate such that you hit 24 hours of credits
  after 30 days of operation.

Miscellaneous other details of this PR:

- a new TX extra tag is used for the state change (including
  deregistrations). The old extra tag has no version or type tag, so it
  couldn't be reused. The data in the new tag is slightly more
  efficiently packed than the old deregistration transaction, so it gets
  used for deregistrations (starting at the v12 fork) as well.
- Correct validator/worker selection required generalizing the shuffle
  function to be able to shuffle just part of a vector. This lets us
  stick any down service nodes at the end of the potential list, then
  select validators by only shuffling the part of the index vector that
  contains active service node indices. Once the validators are selected,
  the remainder of the list (this time including decommissioned SN
  indices) is shuffled to select quorum workers to check, thus allowing
  decommissioned nodes to be randomly included in the nodes to check
  without being selected as a validator.
- Swarm recalculation was not quite right: swarms were recalculated on
  SN registrations, even if those registrations were shared node
  registrations, but *not* recalculated on stakes. Starting with the
  upgrade this behaviour is fixed (swarms aren't actually used currently
  and aren't consensus-relevant so recalculating early won't hurt
  anything).
- Details on decomm/dereg are added to RPC info and print_sn/print_sn_status.
- Slightly improves the % of reward output in the print_sn output by
  rounding it to two digits, and reserves space in the output string to
  avoid excessive reallocations.
- Adds various debugging at higher debug levels to quorum voting (into
  all of voting itself, vote transmission, and vote reception).
- Reset service node list internal data structure version to 0. The SN
  list has to be rescanned anyway at upgrade (its size has changed), so we
  might as well reset the version and remove the version-dependent
  serialization code. (Note that the affected code here is for SN states
  in lmdb storage, not for SN-to-SN communication serialization.)
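// Working through the credit values quoted above (the block-time conversion assumes the
// ~2-minute blocks implied by "8 hours (240 blocks)"; the arithmetic is illustrative only):
//
//   credit needed for a decommission instead of a deregister : 240 blocks  (~8 hours)
//   INITIAL_CREDIT                                            : 0 blocks
//   DECOMMISSION_MAX_CREDIT                                   : 720 blocks  (~24 hours)
//   CREDIT_PER_DAY                                            : 720 / 30 = 24 blocks per day of
//     uptime, so a newly registered node reaches the 24-hour cap after 30 days of continuous
//     operation.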
void service_node_list::get_all_service_nodes_public_keys(std::vector<crypto::public_key> &keys, bool require_active) const
{
  keys.clear();
  keys.reserve(m_state.service_nodes_infos.size());
  if (require_active) {
    for (const auto &key_info : m_state.service_nodes_infos)
      if (key_info.second->is_active())
        keys.push_back(key_info.first);
  }
  else {
    for (const auto &key_info : m_state.service_nodes_infos)
      keys.push_back(key_info.first);
  }
}
static crypto::hash make_uptime_proof_hash(crypto::public_key const &pubkey, uint64_t timestamp, uint32_t pub_ip, uint16_t storage_port)
{
  constexpr size_t BUFFER_SIZE = sizeof(pubkey) + sizeof(timestamp) + sizeof(pub_ip) + sizeof(storage_port);
  boost::endian::native_to_little_inplace(timestamp);
  boost::endian::native_to_little_inplace(pub_ip);
  boost::endian::native_to_little_inplace(storage_port);

  char buf[BUFFER_SIZE];
  crypto::hash result;
  memcpy(buf, reinterpret_cast<const void *>(&pubkey), sizeof(pubkey));
  memcpy(buf + sizeof(pubkey), reinterpret_cast<const void *>(&timestamp), sizeof(timestamp));
  memcpy(buf + sizeof(pubkey) + sizeof(timestamp), reinterpret_cast<const void *>(&pub_ip), sizeof(pub_ip));
  memcpy(buf + sizeof(pubkey) + sizeof(timestamp) + sizeof(pub_ip), reinterpret_cast<const void *>(&storage_port), sizeof(storage_port));
  crypto::cn_fast_hash(buf, sizeof(buf), result);
  return result;
}

cryptonote::NOTIFY_UPTIME_PROOF::request service_node_list::generate_uptime_proof(crypto::public_key const &pubkey,
                                                                                  crypto::secret_key const &key,
                                                                                  uint32_t public_ip,
                                                                                  uint16_t storage_port) const
{
  cryptonote::NOTIFY_UPTIME_PROOF::request result = {};
  result.snode_version_major = static_cast<uint16_t>(LOKI_VERSION_MAJOR);
  result.snode_version_minor = static_cast<uint16_t>(LOKI_VERSION_MINOR);
  result.snode_version_patch = static_cast<uint16_t>(LOKI_VERSION_PATCH);
  result.timestamp           = time(nullptr);
  result.pubkey              = pubkey;
  result.public_ip           = public_ip;
  result.storage_port        = storage_port;

  crypto::hash hash = make_uptime_proof_hash(pubkey, result.timestamp, public_ip, storage_port);
  crypto::generate_signature(hash, pubkey, key, result.sig);
  return result;
}
2019-09-18 07:36:00 +02:00
bool service_node_list::handle_uptime_proof(cryptonote::NOTIFY_UPTIME_PROOF::request const &proof, bool &my_uptime_proof_confirmation)
2019-06-27 09:05:44 +02:00
{
uint8_t const hf_version = m_blockchain.get_current_hard_fork_version();
uint64_t const now = time(nullptr);
// NOTE: Validate proof version, timestamp range,
{
if ((proof.timestamp < now - UPTIME_PROOF_BUFFER_IN_SECONDS) || (proof.timestamp > now + UPTIME_PROOF_BUFFER_IN_SECONDS))
{
LOG_PRINT_L2("Rejecting uptime proof from " << proof.pubkey << ": timestamp is too far from now");
return false;
}
// NOTE: Only care about major version for now
2019-09-13 08:31:58 +02:00
if (hf_version >= cryptonote::network_version_13_enforce_checkpoints && proof.snode_version_major < 5)
{
LOG_PRINT_L2("Rejecting uptime proof from " << proof.pubkey
<< ": v5+ loki version is required for v13+ network proofs");
return false;
}
2019-06-27 09:05:44 +02:00
if (hf_version >= cryptonote::network_version_12_checkpointing && proof.snode_version_major < 4)
{
LOG_PRINT_L2("Rejecting uptime proof from " << proof.pubkey
<< ": v4+ loki version is required for v12+ network proofs");
return false;
}
else if (hf_version >= cryptonote::network_version_11_infinite_staking && proof.snode_version_major < 3)
{
LOG_PRINT_L2("Rejecting uptime proof from " << proof.pubkey << ": v3+ loki version is required for v11+ network proofs");
return false;
}
else if (hf_version >= cryptonote::network_version_10_bulletproofs && proof.snode_version_major < 2)
{
LOG_PRINT_L2("Rejecting uptime proof from " << proof.pubkey << ": v2+ loki version is required for v10+ network proofs");
return false;
}
}
//
// NOTE: Validate proof signature
//
{
2019-08-09 04:52:39 +02:00
crypto::hash hash = make_uptime_proof_hash(proof.pubkey, proof.timestamp, proof.public_ip, proof.storage_port);
bool signature_ok = crypto::check_signature(hash, proof.pubkey, proof.sig);
if (epee::net_utils::is_ip_local(proof.public_ip) || epee::net_utils::is_ip_loopback(proof.public_ip)) return false; // Sanity check; we do the same on lokid startup
2019-06-27 09:05:44 +02:00
if (!signature_ok)
{
LOG_PRINT_L2("Rejecting uptime proof from " << proof.pubkey << ": signature validation failed");
return false;
}
}
2019-06-04 06:15:16 +02:00
2019-06-27 09:05:44 +02:00
std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);
2019-07-15 10:08:52 +02:00
auto it = m_state.service_nodes_infos.find(proof.pubkey);
if (it == m_state.service_nodes_infos.end())
2019-06-27 09:05:44 +02:00
{
LOG_PRINT_L2("Rejecting uptime proof from " << proof.pubkey << ": no such service node is currently registered");
return false;
}
Use shared_ptr storage for service_node_info
This converts the stored service_node_info value into a
`shared_ptr<const service_node_info>` rather than a plain
`service_node_info`. This yields a huge performance benefit by
eliminating the vast majority of service_node_info
construction, destruction, and copying.
Most of the time when we copy a service_node_info nothing in it has
changed, which means we're storing exactly the same thing; this means an
extra construction for every SN info on every block *and* an extra
destruction when we cull old stored history. By using a
shared_ptr, the vast majority of those constructions and destructions
are eliminated.
The immediately previous commit (upon which this one builds) already
reduced a full rescan from 180s to 171s; this commit further reduces
that time to 104s, or about 42% reduced from the rescan time required
before this pair of commits. (All timings are from the dev.lokinet.org
box, tested over multiple runs with the entire lmdb cached in memory).
With the shared_ptr approach, we only make a copy when a change is
actually needed: because of infrequent (at the per-SN level) events like
a state_change, received reward, contribution, etc. The contained
reference is deliberately `const` so that values are not changeable;
there's a new function that does an explicit copying duplication,
returning the new non-const value and storing the const reference in the
shared pointer (a rough sketch of this follows below).
Related to this is a small change (and fix) to how proof info and
public_ip/storage_port are stored: rather than store the values in the
service_node_info struct itself, they now get stored in a shared_ptr
inside the service_node_info that intentionally gets shared among all
copies of the service_node_info (that is, a SN info copy deliberately
copies the pointer rather than the values). This also moves the ip/port
values into the proof struct, since that seemed much easier than
maintaining a separate shared_ptr for each value.
Previously, because these were stored as values in the service_node_info
they would actually get rolled back in the event of a reorg, but that
seems highly undesirable: you would end up rolling back to the old
values of the uptime proof and ip address (for example), but that should
not happen: those values are not dependent on the blockchain and so
should not be affected by a reorg/rollback. With this change they
aren't since there is only one actual proof stored.
Note that the shared storage here only applies to in-memory states;
states loaded from the db will still be duplicated.
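As a rough sketch of the explicit duplication helper described above (the generic types and the name duplicate_info are assumptions for illustration; the actual function added by this commit may differ):
#include <memory>
#include <unordered_map>
// Copy-on-write: states share shared_ptr<const Info> records; only an actual
// modification pays for a copy, and every other state keeps the old record.
template <typename Key, typename Info>
static Info &duplicate_info(std::unordered_map<Key, std::shared_ptr<const Info>> &infos, const Key &key)
{
auto copy = std::make_shared<Info>(*infos.at(key)); // one copy, made only when a change is needed
Info &mutable_ref = *copy;                          // caller mutates this fresh copy
infos.at(key) = std::move(copy);                    // store back as the new const record
return mutable_ref;
}
Usage would be along the lines of duplicate_info(m_state.service_nodes_infos, pubkey) handing back a mutable copy to update, while every node that didn't change keeps pointing at its existing record.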
2019-08-11 18:19:24 +02:00
const service_node_info &info = *it->second;
if (info.proof->timestamp >= now - (UPTIME_PROOF_FREQUENCY_IN_SECONDS / 2))
2019-06-27 09:05:44 +02:00
{
LOG_PRINT_L2("Rejecting uptime proof from " << proof.pubkey
<< ": already received one uptime proof for this node recently");
return false;
}
2019-06-04 06:15:16 +02:00
2019-09-18 03:42:59 +02:00
if (m_service_node_pubkey && (proof.pubkey == *m_service_node_pubkey))
2019-09-18 07:22:27 +02:00
{
2019-09-18 07:36:00 +02:00
my_uptime_proof_confirmation = true;
2019-09-18 03:42:59 +02:00
MGINFO("Received uptime-proof confirmation back from network for Service Node (yours): " << proof.pubkey);
2019-09-18 07:22:27 +02:00
}
2019-09-18 03:42:59 +02:00
else
2019-09-18 07:22:27 +02:00
{
2019-09-18 07:36:00 +02:00
my_uptime_proof_confirmation = false;
2019-09-18 03:42:59 +02:00
LOG_PRINT_L2("Accepted uptime proof from " << proof.pubkey);
2019-09-18 07:22:27 +02:00
}
2019-09-18 03:42:59 +02:00
2019-08-11 18:19:24 +02:00
auto &iproof = *info.proof;
Fix 4.0.4 uptime proof transmission after recomm
When a node gets recommissioned in 4.0.4 we reset its timestamp to the
current time to delay obligations checks for newly recommissioned nodes,
but this reset caused problems:
- the code runs not only when a fresh block is received, but also when
syncing or rescanning, and so time(NULL) gets used to update the
node's timestamp even if it is an old record, and since proof info is
shared across states, this affects the current state.
- as a result of the above, a just-rescanned node that has been
decommissioned at some point in the past will think it has just sent a
proof, and so won't send any proofs for an hour.
- A just-rescanned node won't accept or relay any proofs for any node
that was recommissioned in its scan for the first half an hour, but this
lack of relaying can cause chaos in getting uptime proofs out across the
network, especially while we still have 4.0.3 nodes that need it.
To address the first issue, this switches the recommissioning to use the
block timestamp rather than the current timestamp. This *will* be
slightly delayed in the case of current blocks (since a block timestamp
is the time the pool *started* working on the block, which is generally
the time the previous block was found on the network), but even with an
exceptionally long block delay (e.g. 20 minutes) we are still fending
off obligations checks for 1h40m.
That would partially fix issues 2 and 3, but we actually don't want a
recommissioning to look like a received uptime proof for a couple
reasons:
- When we haven't actually received an uptime proof it's confusing to
report that we have (at the recommission time) and may mask an
underlying issue of a node that isn't actually sending proofs for some
reason (which might be more common for a node that has just been
decommissioned/recommissioned). There's also a related weird state
here for nodes that have come on recently: they think the SN is
active, but have 0's for IP and storage server port.
- 4.0.3 nodes don't get the updated timestamp and so really need the
proof to come through even when the 4.0.4 nodes don't think it's
important/acceptable.
So to also fix these, this commit adds an "effective_timestamp" to the
proof info: if it is larger than the actual timestamp field, we use it
instead of the actual one for obligations checking. On a recommission,
we update only the effective field so that we can delay obligations
checking for a couple of hours without delaying actual proof info going
over the network (a minimal sketch of this behaviour follows below).
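Roughly, the timestamp handling described above behaves like the following sketch; the field and method names follow the prose and are an illustration, not the exact proof_info code:
#include <algorithm>
#include <cstdint>
struct proof_info_sketch
{
uint64_t timestamp = 0;           // when a proof was actually received
uint64_t effective_timestamp = 0; // pushed forward on recommission to defer obligations checks
// A real proof advances the actual timestamp, so proof acceptance/relaying is unaffected.
void update_timestamp(uint64_t ts) { timestamp = ts; effective_timestamp = ts; }
// A recommission only moves the effective field forward (using the block timestamp, per the fix above).
void defer_checks_until(uint64_t block_ts) { effective_timestamp = std::max(effective_timestamp, block_ts); }
// Obligations checking uses whichever of the two is larger.
uint64_t obligations_timestamp() const { return std::max(timestamp, effective_timestamp); }
};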
2019-08-16 19:31:58 +02:00
iproof.update_timestamp(now);
2019-08-11 18:19:24 +02:00
iproof.version_major = proof.snode_version_major;
iproof.version_minor = proof.snode_version_minor;
iproof.version_patch = proof.snode_version_patch;
iproof.public_ip = proof.public_ip;
iproof.storage_port = proof.storage_port;
2019-06-26 06:01:40 +02:00
// Track any IP changes (so that the obligations quorum can penalize for IP changes)
// First prune any stale (>1w) ip info. 1 week is probably excessive, but IP switches should be
// rare and this could, in theory, be useful for diagnostics.
2019-08-11 18:19:24 +02:00
auto &ips = info.proof->public_ips;
2019-06-27 04:25:52 +02:00
// If we already know about the IP, update its timestamp:
if (ips[0].first && ips[0].first == proof.public_ip)
ips[0].second = now;
else if (ips[1].first && ips[1].first == proof.public_ip)
ips[1].second = now;
// Otherwise replace whichever IP has the older timestamp
else if (ips[0].second > ips[1].second)
ips[1] = {proof.public_ip, now};
else
ips[0] = {proof.public_ip, now};
2019-06-27 09:05:44 +02:00
return true;
2019-06-04 06:15:16 +02:00
}
2019-09-17 04:01:13 +02:00
void service_node_list::record_checkpoint_vote(crypto::public_key const &pubkey, uint64_t height, bool voted)
2019-06-27 09:05:44 +02:00
{
std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);
2019-07-15 10:08:52 +02:00
auto it = m_state.service_nodes_infos.find(pubkey);
if (it == m_state.service_nodes_infos.end())
2019-06-27 09:05:44 +02:00
return;
2019-09-17 04:01:13 +02:00
proof_info &info = *it->second->proof;
info.votes[info.vote_index].height = height;
info.votes[info.vote_index].voted = voted;
info.vote_index = (info.vote_index + 1) % info.votes.size();
2019-06-27 09:05:44 +02:00
}
2019-06-04 06:15:16 +02:00
2019-09-02 05:06:15 +02:00
bool service_node_list::set_storage_server_peer_reachable(crypto::public_key const &pubkey, bool value)
{
std::lock_guard<boost::recursive_mutex> lock(m_sn_mutex);
auto it = m_state.service_nodes_infos.find(pubkey);
if (it == m_state.service_nodes_infos.end()) {
LOG_PRINT_L2("No Service Node is known by this pubkey: " << pubkey);
return false;
} else {
proof_info &info = *it->second->proof;
2019-09-17 04:01:13 +02:00
if (info.storage_server_reachable != value)
{
info.storage_server_reachable = value;
2019-09-02 05:06:15 +02:00
LOG_PRINT_L2("Setting reachability status for node " << pubkey << " as: " << (value ? "true" : "false"));
}
2019-09-17 04:01:13 +02:00
info.storage_server_reachable_timestamp = time(nullptr);
2019-09-02 05:06:15 +02:00
return true;
}
}
2019-08-12 01:53:05 +02:00
static quorum_manager quorum_for_serialization_to_quorum_manager(service_node_list::quorum_for_serialization const &source)
{
quorum_manager result = {};
{
auto quorum = std::make_shared<testing_quorum>(source.quorums[static_cast<uint8_t>(quorum_type::obligations)]);
result.obligations = quorum;
}
// Don't load any checkpoints that shouldn't exist (see the comment in generate_quorums as to why the `+BUFFER` term is here).
if ((source.height + REORG_SAFETY_BUFFER_BLOCKS_POST_HF12) % CHECKPOINT_INTERVAL == 0)
{
auto quorum = std::make_shared<testing_quorum>(source.quorums[static_cast<uint8_t>(quorum_type::checkpointing)]);
result.checkpointing = quorum;
}
return result;
}
2019-09-06 06:33:07 +02:00
service_node_list::state_t::state_t(cryptonote::Blockchain const &blockchain, state_serialized &&state)
2019-08-16 07:30:36 +02:00
: height{state.height}
, key_image_blacklist{std::move(state.key_image_blacklist)}
, only_loaded_quorums{state.only_stored_quorums}
2019-07-18 09:42:15 +02:00
, block_hash{state.block_hash}
2019-08-11 05:46:59 +02:00
{
2019-09-06 06:29:23 +02:00
if (state.version == state_serialized::version_t::version_0)
block_hash = blockchain.get_block_id_by_height(height);
2019-08-11 05:46:59 +02:00
for (auto &pubkey_info : state.infos)
2019-09-06 06:33:07 +02:00
{
if (pubkey_info.info->version == service_node_info::version_0_checkpointing)
{
const_cast<service_node_info &>(*pubkey_info.info).version = service_node_info::version_1_add_registration_hf_version;
const_cast<service_node_info &>(*pubkey_info.info).registration_hf_version = blockchain.get_hard_fork_version(pubkey_info.info->registration_height);
}
2019-08-11 05:46:59 +02:00
service_nodes_infos.emplace(std::move(pubkey_info.pubkey), std::move(pubkey_info.info));
2019-09-06 06:33:07 +02:00
}
2019-08-11 05:46:59 +02:00
quorums = quorum_for_serialization_to_quorum_manager(state.quorums);
}
2019-07-17 08:01:54 +02:00
bool service_node_list::load(const uint64_t current_height)
2018-08-16 05:08:36 +02:00
{
2018-08-16 06:13:57 +02:00
LOG_PRINT_L1("service_node_list::load()");
2019-07-15 10:08:52 +02:00
reset(false);
2018-08-16 05:08:36 +02:00
if (!m_db)
{
return false;
}
2019-08-16 10:20:23 +02:00
// NOTE: Deserialize long term state history
uint64_t bytes_loaded = 0;
2019-05-01 09:32:57 +02:00
cryptonote::db_rtxn_guard txn_guard(m_db);
2019-07-15 10:08:52 +02:00
std::string blob;
2019-08-16 10:20:23 +02:00
if (m_db->get_service_node_data(blob, true /*long_term*/))
2018-08-16 05:08:36 +02:00
{
2019-08-16 10:20:23 +02:00
bytes_loaded += blob.size();
std::stringstream ss;
ss << blob;
2019-08-17 03:54:55 +02:00
blob.clear();
2019-08-16 10:20:23 +02:00
binary_archive<false> ba(ss);
2018-08-16 05:08:36 +02:00
2019-08-16 10:20:23 +02:00
data_for_serialization data_in = {};
if (::serialization::serialize(ba, data_in) && data_in.states.size())
2019-07-15 10:08:52 +02:00
{
2019-09-06 06:29:23 +02:00
// NOTE: Previously the quorum for the next state was derived from the
// state that had already been updated from the next block. This is fixed
// in version_1.
// So, copy the quorum from (state.height - 1) to (state.height); all
// states need their (height - 1) quorum, which means we're missing the
// 10k-th interval entry and need to generate it based on the last state.
if (data_in.states[0].version == state_serialized::version_t::version_0)
{
size_t const last_index = data_in.states.size() - 1;
if ((data_in.states.back().height % STORE_LONG_TERM_STATE_INTERVAL) != 0)
{
LOG_PRINT_L0("Last serialised quorum height: " << data_in.states.back().height
<< " in archive is unexpectedly not a multiple of: "
<< STORE_LONG_TERM_STATE_INTERVAL << ", regenerating state");
return false;
}
for (size_t i = data_in.states.size() - 1; i >= 1; i--)
{
state_serialized &serialized_entry = data_in.states[i];
state_serialized &prev_serialized_entry = data_in.states[i - 1];
if ((prev_serialized_entry.height % STORE_LONG_TERM_STATE_INTERVAL) == 0)
{
// NOTE: drop this entry, we have insufficient data to derive
// sadly, do this as a one off and if we ever need this data we
// need to do a full rescan.
continue;
}
state_t entry(m_blockchain, std::move(serialized_entry));
entry.height--;
entry.quorums = quorum_for_serialization_to_quorum_manager(prev_serialized_entry.quorums);
if ((serialized_entry.height % STORE_LONG_TERM_STATE_INTERVAL) == 0)
{
state_t long_term_state = entry;
cryptonote::block const &block = m_db->get_block_from_height(long_term_state.height + 1);
std::vector<cryptonote::transaction> txs = m_db->get_tx_list(block.tx_hashes);
long_term_state.update_from_block(*m_db, m_blockchain.nettype(), {} /*state_history*/, {} /*alt_states*/, block, txs, nullptr /*my_pubkey*/);
entry.service_nodes_infos = {};
entry.key_image_blacklist = {};
entry.only_loaded_quorums = true;
m_state_archive.emplace_hint(m_state_archive.begin(), std::move(long_term_state));
}
m_state_archive.emplace_hint(m_state_archive.begin(), std::move(entry));
}
}
else
{
for (state_serialized &entry : data_in.states)
m_state_archive.emplace_hint(m_state_archive.end(), m_blockchain, std::move(entry));
}
2019-07-15 10:08:52 +02:00
}
2018-08-16 05:08:36 +02:00
}
2019-08-16 10:20:23 +02:00
// NOTE: Deserialize short term state history
if (!m_db->get_service_node_data(blob, false))
return false;
2019-08-12 01:53:05 +02:00
2019-08-16 10:20:23 +02:00
bytes_loaded += blob.size();
2019-07-15 10:08:52 +02:00
std::stringstream ss;
2018-08-16 05:08:36 +02:00
ss << blob;
binary_archive<false> ba(ss);
2019-08-12 01:53:05 +02:00
2019-08-16 07:34:27 +02:00
data_for_serialization data_in = {};
2019-08-16 10:20:23 +02:00
bool deserialized = ::serialization::serialize(ba, data_in);
2019-08-16 07:34:27 +02:00
CHECK_AND_ASSERT_MES(deserialized, false, "Failed to parse service node data from blob");
2019-07-15 10:08:52 +02:00
2019-08-16 08:15:43 +02:00
if (data_in.states.empty())
2019-07-15 10:08:52 +02:00
return false;
2019-07-17 08:01:54 +02:00
2018-08-17 07:14:45 +02:00
{
2019-08-02 03:07:37 +02:00
const uint64_t hist_state_from_height = current_height - m_store_quorum_history;
uint64_t last_loaded_height = 0;
2019-08-16 07:34:27 +02:00
for (auto &states : data_in.quorum_states)
2019-07-22 07:16:29 +02:00
{
2019-08-02 03:07:37 +02:00
if (states.height < hist_state_from_height)
continue;
2019-08-12 01:53:05 +02:00
quorums_by_height entry = {};
entry.height = states.height;
entry.quorums = quorum_for_serialization_to_quorum_manager(states);
2019-08-02 03:07:37 +02:00
if (states.height <= last_loaded_height)
{
LOG_PRINT_L0("Serialised quorums is not stored in ascending order by height in DB, failed to load from DB");
return false;
}
last_loaded_height = states.height;
m_old_quorum_states.push_back(entry);
}
}
{
2019-08-16 07:34:27 +02:00
assert(data_in.states.size() > 0);
2019-09-06 06:29:23 +02:00
size_t const last_index = data_in.states.size() - 1;
2019-08-17 03:54:55 +02:00
if (data_in.states[last_index].only_stored_quorums)
{
LOG_PRINT_L0("Unexpected last serialized state only has quorums loaded");
return false;
}
2019-07-18 09:42:15 +02:00
2019-09-06 06:29:23 +02:00
if (data_in.states[0].version == state_serialized::version_t::version_0)
{
for (size_t i = last_index; i >= 1; i--)
{
state_serialized &serialized_entry = data_in.states[i];
state_serialized &prev_serialized_entry = data_in.states[i - 1];
state_t entry(m_blockchain, std::move(serialized_entry));
entry.quorums = quorum_for_serialization_to_quorum_manager(prev_serialized_entry.quorums);
entry.height--;
if (i == last_index) m_state = std::move(entry);
else m_state_archive.emplace_hint(m_state_archive.end(), std::move(entry));
}
}
else
{
size_t const last_index = data_in.states.size() - 1;
for (size_t i = 0; i < last_index; i++)
{
state_serialized &entry = data_in.states[i];
if (entry.block_hash == crypto::null_hash) entry.block_hash = m_blockchain.get_block_id_by_height(entry.height);
m_state_history.emplace_hint(m_state_history.end(), m_blockchain, std::move(entry));
}
state_serialized &last_entry = data_in.states[last_index];
m_state = state_t(m_blockchain, std::move(last_entry));
}
2018-08-16 05:08:36 +02:00
}
2019-07-15 10:08:52 +02:00
MGINFO ( " Service node data loaded successfully, height: " < < m_state . height ) ;
MGINFO ( m_state . service_nodes_infos . size ( )
2019-08-27 05:28:46 +02:00
< < " nodes and " < < m_state_history . size ( ) < < " recent states loaded, " < < m_state_archive . size ( )
< < " historical states loaded, ( " < < tools : : get_human_readable_bytes ( bytes_loaded ) < < " ) " ) ;
2018-08-16 06:13:57 +02:00
LOG_PRINT_L1 ( " service_node_list::load() returning success " ) ;
2018-08-16 05:08:36 +02:00
return true;
}
2019-07-15 10:08:52 +02:00
void service_node_list::reset(bool delete_db_entry)
2018-08-16 05:08:36 +02:00
{
2019-07-15 10:08:52 +02:00
m_state_history.clear();
2019-08-02 03:07:37 +02:00
m_old_quorum_states.clear();
2019-07-15 10:08:52 +02:00
m_state = {};
2018-08-16 05:08:36 +02:00
if (m_db && delete_db_entry)
{
2019-05-01 09:32:57 +02:00
cryptonote::db_wtxn_guard txn_guard(m_db);
2018-08-16 05:08:36 +02:00
m_db->clear_service_node_data();
}
2018-10-17 02:07:04 +02:00
uint64_t hardfork_9_from_height = 0;
{
uint32_t window, votes, threshold;
uint8_t voting;
m_blockchain.get_hard_fork_voting_info(9, window, votes, threshold, hardfork_9_from_height, voting);
}
2019-09-02 05:20:05 +02:00
m_state.height = hardfork_9_from_height - 1;
2018-08-16 05:08:36 +02:00
}
Infinite Staking Part 2 (#406)
* Cleanup and undoing some protocol breakages
* Simplify expiration of nodes
* Request unlock schedules entire node for expiration
* Fix off by one in expiring nodes
* Undo expiring code for pre v10 nodes
* Fix RPC returning register as unlock height and not checking 0
* Rename key image unlock height const
* Undo testnet hardfork debug changes
* Remove is_type for get_type, fix missing var rename
* Move serialisable data into public namespace
* Serialise tx types properly
* Fix typo in no service node known msg
* Code review
* Fix == to >= on serialising tx type
* Code review 2
* Fix tests and key image unlock
* Add command to print locked key images
* Update ui to display lock stakes, query in print cmd blacklist
* Modify print stakes to be less slow
* Remove autostaking code
* Refactor staking into sweep functions
It appears staking was derived from stake_main, written separately at
the beginning of the implementation. This merges them back into a common
code path; after removing autostake there are only some minor differences.
It also makes sure that any changes to sweeping upstream are going to be
considered in the staking process which we want.
* Display unlock height for stakes
* Begin creating output blacklist
* Make blacklist output a migration step
* Implement get_output_blacklist for lmdb
* In wallet output selection ignore blacklisted outputs
* Apply blacklisted outputs to output selection
* Fix broken tests, switch key image unlock
* Fix broken unit_tests
* Begin change to limit locked key images to 4 globally
* Revamp prepare registration for new min contribution rules
* Fix up old back case in prepare registration
* Remove debug code
* Cleanup debug code and some unnecessary changes
* Fix migration step on mainnet db
* Fix blacklist outputs for pre-existing DB's
* Remove irrelevant note
* Tweak scanning addresses for locked stakes
Since we only now allow contributions from the primary address we can
skip checking all subaddress + lookahead to speed up wallet scanning
* Define macro for SCNu64 for Mingw
* Fix failure on empty DB
* Add missing error msg, remove contributor from stake
* Improve staking messages
* Flush prompt to always display
* Return the msg from stake failure and fix stake parsing error
* Tweak fork rules for smaller bulletproofs
* Tweak pooled nodes minimum amounts
* Fix crash on exit, there's no need to store on destructor
Since all information about service nodes is derived from the blockchain
and we store state every time we receive a block, storing in the
destructor is redundant as there is no new information to store.
* Make prompt be consistent with CLI
* Check max number of key images from per user to node
* Implement error message on get_output_blacklist failure
* Remove resolved TODO's/comments
* Handle infinite staking in print_sn
* Atoi->strtol, fix prepare_registration, virtual override, stale msgs
2019-02-14 02:12:57 +01:00
size_t service_node_info::total_num_locked_contributions() const
{
size_t result = 0;
for (service_node_info::contributor_t const &contributor : this->contributors)
result += contributor.locked_contributions.size();
return result;
}
2019-03-13 06:35:02 +01:00
converted_registration_args convert_registration_args(cryptonote::network_type nettype,
const std::vector<std::string> &args,
uint64_t staking_requirement,
2019-06-26 06:00:05 +02:00
uint8_t hf_version)
2018-07-21 03:27:13 +02:00
{
2019-03-13 06:35:02 +01:00
converted_registration_args result = {};
2018-08-16 07:14:28 +02:00
if (args.size() % 2 == 0 || args.size() < 3)
2018-08-03 07:17:15 +02:00
{
2019-03-13 06:35:02 +01:00
result.err_msg = tr("Usage: <operator cut> <address> <fraction> [<address> <fraction> [...]]]");
return result;
2018-08-03 07:17:15 +02:00
}
2019-03-13 06:35:02 +01:00
2018-08-16 07:14:28 +02:00
if ((args.size() - 1) / 2 > MAX_NUMBER_OF_CONTRIBUTORS)
2018-07-31 04:46:12 +02:00
{
2019-03-13 06:35:02 +01:00
result.err_msg = tr("Exceeds the maximum number of contributors, which is ") + std::to_string(MAX_NUMBER_OF_CONTRIBUTORS);
return result;
2018-07-21 03:27:13 +02:00
}
2019-03-13 06:35:02 +01:00
2018-08-07 04:28:59 +02:00
try
{
2019-03-13 06:35:02 +01:00
result.portions_for_operator = boost::lexical_cast<uint64_t>(args[0]);
if (result.portions_for_operator > STAKING_PORTIONS)
2018-08-07 04:28:59 +02:00
{
2019-03-13 06:35:02 +01:00
result.err_msg = tr("Invalid portion amount: ") + args[0] + tr(". Must be between 0 and ") + std::to_string(STAKING_PORTIONS);
return result;
2018-08-07 04:28:59 +02:00
}
}
catch (const std::exception &e)
{
2019-03-13 06:35:02 +01:00
result.err_msg = tr("Invalid portion amount: ") + args[0] + tr(". Must be between 0 and ") + std::to_string(STAKING_PORTIONS);
return result;
2018-08-07 04:28:59 +02:00
}
2019-03-13 06:35:02 +01:00
2019-03-28 08:18:23 +01:00
struct addr_to_portion_t
{
cryptonote::address_parse_info info;
uint64_t portions;
};
std::vector<addr_to_portion_t> addr_to_portions;
2019-03-13 06:35:02 +01:00
size_t const OPERATOR_ARG_INDEX = 1;
for (size_t i = OPERATOR_ARG_INDEX, num_contributions = 0;
i < args.size();
i += 2, ++num_contributions)
2018-07-21 03:27:13 +02:00
{
cryptonote::address_parse_info info;
if (!cryptonote::get_account_address_from_str(info, nettype, args[i]))
{
2019-03-13 06:35:02 +01:00
result.err_msg = tr("Failed to parse address: ") + args[i];
return result;
2018-07-21 03:27:13 +02:00
}
if (info.has_payment_id)
{
2019-03-13 06:35:02 +01:00
result.err_msg = tr("Can't use a payment id for staking tx");
return result;
2018-07-21 03:27:13 +02:00
}
2018-07-28 08:27:04 +02:00
2018-07-21 03:27:13 +02:00
if (info.is_subaddress)
{
2019-03-13 06:35:02 +01:00
result.err_msg = tr("Can't use a subaddress for staking tx");
return result;
2018-07-21 03:27:13 +02:00
}
try
{
2018-08-18 06:36:16 +02:00
uint64_t num_portions = boost::lexical_cast<uint64_t>(args[i + 1]);
2019-03-28 08:18:23 +01:00
addr_to_portions.push_back({info, num_portions});
2018-07-21 03:27:13 +02:00
}
catch (const std::exception &e)
{
2019-03-13 06:35:02 +01:00
result.err_msg = tr("Invalid amount for contributor: ") + args[i] + tr(", with portion amount that could not be converted to a number: ") + args[i + 1];
return result;
2018-07-21 03:27:13 +02:00
}
}
2019-03-13 06:35:02 +01:00
2019-03-28 08:18:23 +01:00
//
// FIXME(doyle): FIXME(loki) !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
// This is temporary code to redistribute the insufficient portion dust
// amounts between contributors. It should be removed in HF12.
//
std::array<uint64_t, MAX_NUMBER_OF_CONTRIBUTORS * service_nodes::MAX_KEY_IMAGES_PER_CONTRIBUTOR> excess_portions;
std::array<uint64_t, MAX_NUMBER_OF_CONTRIBUTORS * service_nodes::MAX_KEY_IMAGES_PER_CONTRIBUTOR> min_contributions;
{
// NOTE: Calculate excess portions from each contributor
uint64_t loki_reserved = 0;
for (size_t index = 0; index < addr_to_portions.size(); ++index)
{
addr_to_portion_t const &addr_to_portion = addr_to_portions[index];
uint64_t min_contribution_portions = service_nodes::get_min_node_contribution_in_portions(hf_version, staking_requirement, loki_reserved, index);
uint64_t loki_amount = service_nodes::portions_to_amount(staking_requirement, addr_to_portion.portions);
loki_reserved += loki_amount;
uint64_t excess = 0;
if (addr_to_portion.portions > min_contribution_portions)
excess = addr_to_portion.portions - min_contribution_portions;
min_contributions[index] = min_contribution_portions;
excess_portions[index] = excess;
}
}
uint64_t portions_left = STAKING_PORTIONS;
uint64_t total_reserved = 0;
for (size_t i = 0; i < addr_to_portions.size(); ++i)
{
addr_to_portion_t &addr_to_portion = addr_to_portions[i];
uint64_t min_portions = get_min_node_contribution_in_portions(hf_version, staking_requirement, total_reserved, i);
uint64_t portions_to_steal = 0;
if (addr_to_portion.portions < min_portions)
{
// NOTE: Steal dust portions from other contributor if we fall below
// the minimum by a dust amount.
uint64_t needed = min_portions - addr_to_portion.portions;
const uint64_t FUDGE_FACTOR = 10;
const uint64_t DUST_UNIT = (STAKING_PORTIONS / staking_requirement);
const uint64_t DUST = DUST_UNIT * FUDGE_FACTOR;
if (needed > DUST)
continue;
for (size_t sub_index = 0; sub_index < addr_to_portions.size(); sub_index++)
{
if (i == sub_index) continue;
uint64_t &contributor_excess = excess_portions[sub_index];
if (contributor_excess > 0)
{
portions_to_steal = std::min(needed, contributor_excess);
addr_to_portion.portions += portions_to_steal;
contributor_excess -= portions_to_steal;
needed -= portions_to_steal;
result.portions[sub_index] -= portions_to_steal;
if (needed == 0)
break;
}
}
// NOTE: Operator is sending in the minimum amount and it falls below
// the minimum by dust, just increase the portions so it passes
if (needed > 0 && addr_to_portions.size() < MAX_NUMBER_OF_CONTRIBUTORS * service_nodes::MAX_KEY_IMAGES_PER_CONTRIBUTOR)
addr_to_portion.portions += needed;
}
2019-03-28 09:22:16 +01:00
if (addr_to_portion.portions < min_portions || (addr_to_portion.portions - portions_to_steal) > portions_left)
2019-03-28 08:18:23 +01:00
{
result.err_msg = tr("Invalid amount for contributor: ") + args[i] + tr(", with portion amount: ") + args[i + 1] + tr(". The contributors must each have at least 25%, except for the last contributor which may have the remaining amount");
return result;
}
if (min_portions == UINT64_MAX)
{
result.err_msg = tr("Too many contributors specified, you can only split a node with up to: ") + std::to_string(MAX_NUMBER_OF_CONTRIBUTORS) + tr(" people.");
return result;
}
portions_left -= addr_to_portion.portions;
portions_left += portions_to_steal;
result.addresses.push_back(addr_to_portion.info.address);
result.portions.push_back(addr_to_portion.portions);
uint64_t loki_amount = service_nodes::portions_to_amount(addr_to_portion.portions, staking_requirement);
total_reserved += loki_amount;
}
2019-03-13 06:35:02 +01:00
result.success = true;
return result;
2018-07-31 08:13:03 +02:00
}
2019-03-13 06:35:02 +01:00
bool make_registration_cmd(cryptonote::network_type nettype,
2019-06-26 06:00:05 +02:00
uint8_t hf_version,
2019-03-13 06:35:02 +01:00
uint64_t staking_requirement,
const std::vector<std::string> &args,
const crypto::public_key &service_node_pubkey,
const crypto::secret_key &service_node_key,
std::string &cmd,
bool make_friendly,
boost::optional<std::string &> err_msg)
2018-07-31 08:13:03 +02:00
{
2019-03-13 06:35:02 +01:00
converted_registration_args converted_args = convert_registration_args(nettype, args, staking_requirement, hf_version);
if (!converted_args.success)
2018-07-31 08:13:03 +02:00
{
2019-03-13 06:35:02 +01:00
MERROR(tr("Could not convert registration args, reason: ") << converted_args.err_msg);
2018-07-31 08:13:03 +02:00
return false;
}
2019-02-14 02:12:57 +01:00
uint64_t exp_timestamp = time(nullptr) + STAKING_AUTHORIZATION_EXPIRATION_WINDOW;
2018-07-31 08:13:03 +02:00
crypto::hash hash;
2019-03-13 06:35:02 +01:00
bool hashed = cryptonote::get_registration_hash(converted_args.addresses, converted_args.portions_for_operator, converted_args.portions, exp_timestamp, hash);
2018-07-31 08:13:03 +02:00
if (!hashed)
{
2018-08-07 04:28:59 +02:00
MERROR(tr("Could not make registration hash from addresses and portions"));
2018-07-31 08:13:03 +02:00
return false;
}
crypto::signature signature;
crypto::generate_signature(hash, service_node_pubkey, service_node_key, signature);
std::stringstream stream;
if (make_friendly)
{
stream << tr("Run this command in the wallet that will fund this registration:\n\n");
}
stream << "register_service_node";
for (size_t i = 0; i < args.size(); ++i)
{
stream << " " << args[i];
}
stream << " " << exp_timestamp << " ";
stream << epee::string_tools::pod_to_hex(service_node_pubkey) << " ";
2018-08-03 08:52:44 +02:00
stream << epee::string_tools::pod_to_hex(signature);
2018-07-31 08:13:03 +02:00
if (make_friendly)
{
stream << "\n\n";
time_t tt = exp_timestamp;
2018-12-11 07:42:06 +01:00
2018-07-31 08:13:03 +02:00
struct tm tm;
2018-12-11 07:42:06 +01:00
epee::misc_utils::get_gmt_time(tt, tm);
2018-07-31 08:13:03 +02:00
char buffer[128];
strftime(buffer, sizeof(buffer), "%Y-%m-%d %I:%M:%S %p", &tm);
2018-08-16 07:14:28 +02:00
stream << tr("This registration expires at ") << buffer << tr(".\n");
2019-03-20 01:18:30 +01:00
stream << tr("This should be in about 2 weeks, if it isn't, check this computer's clock.\n");
2018-07-31 08:13:03 +02:00
stream << tr("Please submit your registration into the blockchain before this time or it will be invalid.");
}
cmd = stream.str();
2018-07-21 03:27:13 +02:00
return true;
}
2019-07-09 05:59:42 +02:00
2019-08-12 01:48:47 +02:00
bool service_node_info::can_be_voted_on(uint64_t height) const
2019-07-09 05:59:42 +02:00
{
2019-08-12 01:48:47 +02:00
// If the SN expired and was reregistered since the height we'll be voting on it prematurely
if (!this->is_fully_funded() || this->registration_height >= height) return false;
if (this->is_decommissioned() && this->last_decommission_height >= height) return false;
2019-08-15 08:03:32 +02:00
if (this->is_active())
{
// NOTE: This cast is safe. The definition of is_active() is that active_since_height >= 0
assert(this->active_since_height >= 0);
if (static_cast<uint64_t>(this->active_since_height) >= height) return false;
}
2019-08-12 01:48:47 +02:00
return true;
}
bool service_node_info::can_transition_to_state(uint8_t hf_version, uint64_t height, new_state proposed_state) const
{
2019-09-11 08:14:24 +02:00
if (hf_version >= cryptonote::network_version_13_enforce_checkpoints)
2019-08-12 08:40:04 +02:00
{
2019-09-11 08:14:24 +02:00
if (!can_be_voted_on(height))
2019-08-12 08:40:04 +02:00
return false;
2019-09-11 08:14:24 +02:00
if (proposed_state == new_state::deregister)
{
if (height <= this->registration_height)
return false;
}
else if (proposed_state == new_state::ip_change_penalty)
{
if (height <= this->last_ip_change_height)
return false;
}
if (this->is_decommissioned())
{
return proposed_state != new_state::decommission && proposed_state != new_state::ip_change_penalty;
}
return (proposed_state != new_state::recommission);
2019-07-09 05:59:42 +02:00
}
else
{
2019-09-11 08:14:24 +02:00
if (proposed_state == new_state::deregister)
{
if (height < this->registration_height) return false;
}
if (this->is_decommissioned())
{
return proposed_state != new_state::decommission && proposed_state != new_state::ip_change_penalty;
}
else
{
return (proposed_state != new_state::recommission);
}
2019-07-09 05:59:42 +02:00
}
}
2018-06-29 06:47:00 +02:00
}
2018-07-21 03:27:13 +02:00