This updates the coinbase transactions to reward service nodes
periodically rather than every block. If you receive a service node
reward, it is delayed x blocks; if the same wallet receives another
reward before those blocks have passed, it is added to your total,
and the whole amount is paid out once the x blocks have elapsed.
For example, if our batching interval is 2 blocks:
Block 1 - Address A receives reward of 10 oxen - added to batch
Block 2 - Address A receives reward of 10 oxen - added to batch
Block 3 - Address A is paid out 20 oxen.
Batching accumulates a small reward for all nodes every block
The batching of service node rewards allows us to drip-feed rewards
to service nodes. Rather than accruing 16.5 oxen to a service node every
time it is the pulse block leader, we now reward every node
16.5 / num_service_nodes oxen every block and pay each wallet the full
accrued amount after a period of time (likely 3.5 days).
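A minimal sketch of the per-block accrual described above (all names, types, and units here are illustrative, not the actual batching code):

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: instead of paying the pulse block leader 16.5 oxen at
// once, every node accrues block_reward / num_service_nodes each block, and
// the accrued balance is paid out later. Amounts are in atomic units.
std::map<std::string, uint64_t> accrued; // address -> accrued atomic units

void accrue_block_reward(uint64_t block_reward,
                         const std::vector<std::string>& node_addresses) {
    uint64_t share = block_reward / node_addresses.size();
    for (const auto& addr : node_addresses)
        accrued[addr] += share;
}
```

With two nodes, each accrues half the block reward per block, so after two blocks each address has accrued one full block reward.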
To spread the payments evenly we now schedule payouts based on the
address of the recipient: the modulus of the address determines the
block at which that address is paid. By setting the modulus interval to
our service_node_batching interval we can guarantee wallets are paid out
regularly, with the payments for all wallets distributed evenly over
this interval.
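The modulus scheduling can be sketched as follows, assuming a hypothetical `BATCHING_INTERVAL` of 2520 blocks (~3.5 days of 2-minute blocks) and an already-computed numeric hash of the address; none of these names come from the actual code:

```cpp
#include <cstdint>

// Illustrative batching interval: 3.5 days of 2-minute blocks = 2520 blocks.
constexpr uint64_t BATCHING_INTERVAL = 2520;

// A wallet is paid at heights whose offset within the interval matches the
// offset derived from its address, so payouts for different wallets are
// spread evenly across the interval.
bool is_payout_block(uint64_t address_hash, uint64_t height) {
    return height % BATCHING_INTERVAL == address_hash % BATCHING_INTERVAL;
}
```

Each address then gets exactly one payout height per interval, recurring every `BATCHING_INTERVAL` blocks.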
Snode revisions are a secondary version that let us put out a mandatory
update for snodes that isn't a hardfork (and so isn't mandatory for
wallets/exchanges/etc.).
The main point of this is to let us make a 9.2.0 release that includes
new mandatory minimum versions of storage server (2.2.0) and
lokinet (0.9.4) to bring upgrades to the network.
This slightly changes the HF7 blocks to 0 (instead of 1) because,
apparently, we weren't properly checking the HF value of the
pre-first-hf genesis block at all before. (In practice this changes
nothing because genesis blocks are v7 anyway).
This also changes (slightly) how we check for hard forks: if the hard
fork table skips some versions, we still need to know the height at
which each skipped version activates. For example, if the hf table
contains {7,14} then the HF14 block height is also the height that
activates HF9, HF10, etc.
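The lookup rule can be illustrated with a sparse table; `hf_table` and `activation_height` are invented names for this sketch, not the actual hardfork code:

```cpp
#include <cstdint>
#include <map>

// Sparse hard fork table: version -> activation height. Versions 8..13 are
// skipped, so they activate at the same height as v14.
std::map<uint8_t, uint64_t> hf_table = {{7, 0}, {14, 1000}};

uint64_t activation_height(uint8_t version) {
    // First table entry whose version is >= the requested one: a skipped
    // version activates together with the next listed fork.
    auto it = hf_table.lower_bound(version);
    return it == hf_table.end() ? UINT64_MAX : it->second;
}
```

So `activation_height(9)` returns the HF14 height, matching the rule described above.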
random sampling of service nodes, call timestamp lmq message
checks the timestamps of 5 service nodes; if local time differs by 30 seconds from 80% of the nodes tested, warn the user
tracks external timesync status and timestamp participation of service nodes
clean up includes
new template struct for participation history, individual types for participation entry
refactor checking participation
update select_randomly, move the testing for variance overflow
version locks, bump to 8.1.5
explicit casting for mac & clang
note to remove after hard fork
timestamp debugging log messages
debugging messages for before timesync - before message sent
fix logging errors when compiling
print version and change add_command to add_request_command
log if statement test
std::to_string replaced with tools::view_guts for x25519 key
check if my sn is active before sending timestamp requests
logging the failures
checking if statement for success of message
more logging when if-guards aren't passing
more logging, successfully tests if service node might be out of sync
more tests before we decide we are out of sync
logging output if sn isn't passing tests
if check_participation fails then disconnect
print timestamp status
remove saving variance from the participation history
reduce MIN_TIME_IN_S_BEFORE_VOTING
reset participation history on recommission
undo reduction in startup time
reduce log levels
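The out-of-sync heuristic described at the top of this series might look roughly like this; the 30-second and 80% thresholds come from the text above, while the function name and shape are assumptions:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Warn-if-out-of-sync sketch: sample a handful of peer timestamps and flag
// the local clock when it differs by more than 30 seconds from at least 80%
// of the nodes tested.
bool likely_out_of_sync(int64_t local_time, const std::vector<int64_t>& peer_times) {
    std::size_t divergent = 0;
    for (int64_t t : peer_times)
        if (std::llabs(local_time - t) > 30)
            ++divergent;
    // divergent / size >= 0.8, in integer arithmetic:
    return divergent * 5 >= peer_times.size() * 4;
}
```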
Set hardfork time in testnet
Very annoyingly, equal timestamps just ignored the fork because
the `return false` error wasn't caught anywhere. Allow equal timestamps
between subsequent forks, and throw an exception on such errors so that
bad fork data is an error rather than silently ignored.
Replace stagenet parameters with new devnet parameters:
- Wallet prefix is now dV[123] for regular wallets, dV[ABC] for
integrated addresses, and dV[abc] for subaddresses. Wallet length is
the same as testnet (i.e. 2 longer than mainnet).
- devnet has a single v7 genesis block (because changing this is too
intrusive) and then immediately jumps to v15 at height 2.
- Moved default ports from 3805x to 3885x (to avoid conflicting with
potential existing stagenet daemons)
- devnet staking requirement is 1/10 of the mainnet requirement
beginning at height 600000, i.e. the devnet staking requirement at
height 1234 == 1/10 of the mainnet requirement at height 601234.
(This lets devnet test out the decreasing staking requirement, which
testnet's fixed requirement has never allowed.)
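The height offset rule can be sketched like so; `mainnet_requirement` here is a stand-in with an arbitrary decreasing curve, not the real emission code:

```cpp
#include <cstdint>

// Stand-in for the mainnet staking requirement curve (arbitrary units,
// decreasing with height; the real curve is different).
uint64_t mainnet_requirement(uint64_t height) {
    return 20000 - height / 100;
}

// Devnet rule from the text: requirement at height H is 1/10 of the mainnet
// requirement at height H + 600000.
uint64_t devnet_requirement(uint64_t height) {
    return mainnet_requirement(height + 600000) / 10;
}
```

Because the curve decreases with height, devnet immediately exercises the declining-requirement code path instead of starting from the flat early-chain values.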
This purges epee::critical_region/epee::critical_section and the awful
CRITICAL_REGION_LOCAL and CRITICAL_REGION_LOCAL1 and
CRITICAL_REGION_BEGIN1 and all that crap from epee code.
This wrapper class around a mutex is just painful, macro-infested
indirection that accomplishes nothing (and, worse, forces all using code
to use a std::recursive_mutex even when a different mutex type is more
appropriate).
This commit purges it, replacing the "critical_section" mutex wrappers
with either std::mutex, std::recursive_mutex, or std::shared_mutex as
appropriate. I kept anything that looked uncertain as a
recursive_mutex, simple cases that obviously don't recurse as
std::mutex, and simple cases with reader/writing mechanics as a
shared_mutex.
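As an illustration of the replacement pattern (the class and its members here are invented, not actual daemon code), a `CRITICAL_REGION_LOCAL`-style section with reader/writer mechanics becomes a `std::shared_mutex` with RAII locks:

```cpp
#include <mutex>
#include <shared_mutex>

// Before: epee::critical_section member + CRITICAL_REGION_LOCAL(m_lock)
// macro at the top of each method. After: a plain standard mutex type chosen
// to fit the access pattern, locked via RAII.
struct example_cache {
    mutable std::shared_mutex mtx; // reader/writer case -> shared_mutex
    int value = 0;

    void set(int v) {
        std::unique_lock lock{mtx}; // writer: exclusive lock
        value = v;
    }
    int get() const {
        std::shared_lock lock{mtx}; // reader: shared lock
        return value;
    }
};
```

Simple non-recursive cases would use `std::mutex` with `std::lock_guard` instead; only genuinely re-entrant call paths keep `std::recursive_mutex`.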
Ideally all the recursive_mutexes should be eliminated because a
recursive_mutex is almost always a design flaw where someone has let the
locking code get impossibly tangled, but that requires a lot more time
to properly trace down all the ways the mutexes are used.
Other notable changes:
- There was one NIH promise/future-like class here that was used in
exactly one place in p2p/net_node; I replaced it with a
std::promise/std::future.
- moved the mutex for LMDB resizing into LMDB itself; having it in the
abstract base class is bad design, and also made it impossible to make a
moveable base class (which gets used for the fake db classes in the test
code).
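For reference, the standard-library replacement for such a NIH promise/future class looks like this (the surrounding worker code is invented for illustration):

```cpp
#include <future>
#include <thread>

// std::promise/std::future: the producer thread fulfils the promise, and the
// consumer blocks on future::get() until the value is ready.
int wait_for_result() {
    std::promise<int> p;
    std::future<int> f = p.get_future();
    std::thread worker([&p] { p.set_value(7); }); // producer side
    int result = f.get();                         // blocks until set_value runs
    worker.join();
    return result;
}
```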
- Fix assert to use version_t::_count in service_node_list.cpp
- Incorporate staking changes from LNS which revamp construct tx to
derive more of the parameters from hf_version and type. This removes
extraneous parameters that can be derived elsewhere.
Also delegate setting loki_construct_tx_params into wallet2, at least,
so that callers into the wallet logic, like the RPC server and
simplewallet, don't need to bend over backwards to get the HF version
when the wallet has dedicated functions for determining the HF.
This adds vote relaying via quorumnet.
- The main glue between existing code and quorumnet code is in
cryptonote_protocol/quorumnet.{h,cpp}
- Uses function pointers assigned at initialization to call into the
glue without making cn_core depend on p2p
- Adds ed25519 and quorumnet port to uptime proofs and serialization.
- Added a second uptime proof signature using the ed25519 key so that
SNs prove they actually have it (otherwise they could spoof
someone else's pubkey).
- Adds quorumnet port, defaulting to zmq rpc port + 1. quorumnet
listens on the p2p IPv4 address (but broadcasts the `public_ip` to the
network).
- Derives x25519 when received or deserialized.
- Converted service_node_info::version_t into a scoped enum (with the
same underlying type).
- Added contribution_t::version_t scoped enum. It was using
service_node_info::version for a 0 value, which seemed rather wrong.
Random small details:
- Renamed internal `arg_sn_bind_port` for consistency
- Moved default stagenet ports from 38153-38155 to 38056-38058, and add the default stagenet
quorumnet port at 38059 (this keeps the range contiguous; otherwise the next stagenet default port
would collide with the testnet p2p port).
When processing a quorum for a block, if you are not in the quorum to
validate other nodes, check if you're a node that is going to be tested.
If you are, check based on your current data if you're potentially
a candidate to be decommissioned/deregistered and if so report it to the
console log.
Note that this is only a heuristic: ultimately the decision rests on
what the other Service Nodes perceive the current state of your node
to be (i.e. if they're acting maliciously then you will be deregistered
regardless).
* print_quorum_state displays checkpointing quorums, remove batched call
* Update the help text for quorum commands
* <= for the max_quorum_type not <
* Handle heights greater than the latest
* Don't repeatedly add partially filled quorums
* Revert get_quorum_state to take a range (common case)
* Update the help text from print_quorum_state
Ending the db txn in add_block caused the entire overarching
batch txn to stop.
Also add a new guard class so a db txn can be stopped in the
face of exceptions.
Also use a read only db txn in init when the db itself is
read only, and do not save the max tx size in that case.
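A generic sketch of such an exception-safe guard (the real class lives in the db layer and wraps actual txn handles; this shows only the RAII shape):

```cpp
#include <functional>

// RAII guard: if an exception unwinds the scope before commit() is called,
// the destructor stops the txn; on the normal path commit() disarms it, so
// the enclosing batch txn is unaffected.
class db_txn_guard {
    std::function<void()> stop_txn;
    bool active = true;
public:
    explicit db_txn_guard(std::function<void()> stop) : stop_txn(std::move(stop)) {}
    db_txn_guard(const db_txn_guard&) = delete;
    void commit() { active = false; }           // normal path: keep the txn
    ~db_txn_guard() { if (active) stop_txn(); } // unwind path: stop the txn
};
```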