Add migratory code for new alt_block_data_t.checkpointed field
Code review
Redo switch to alt chain logic to handle PoW properly
We always switch to the chain with the prevailing checkpoints. If an old
chain reappears with more checkpoints it will be rebroadcast, and we will
switch over at that point. So here we restore the old behaviour of
keeping alt chains around or not depending on the scenario that caused
the chain switch.
Remove using crypto, using cryptonote, using epee in chaingen
Add loki prefix to data structures, cleanup warts
Remove the need for get_static_keys(), remove useless functions, and
attempt to remove some pointless function overloading that adds too many
levels of indirection and makes code execution harder to follow.
Rehaul tests further, persist test generator when replaying through core
The class used to generate the testing events to replay through core is
now preserved. This subtle change simplifies a big chunk of core testing
in that we can remove the need for using callbacks to track events in
the events vector.
Add preliminary checkpoint/alt service node tests
Allow inlining of fail cases for test conditions
Add more checkpoint core tests
add_block in generator is parameterised with checkpoints
Add test to check chain with equal checkpoints
Add test for chain reorging when receiving enough votes for checkpoint
Move service node list methods to state_t methods
Add querying state from alt blocks and put key image parsing into function
Incorporate hash into state_t to find alt states
Add a way to query alternate testing quorums
Rebase cleanup
This switches loki 5.x to use a fee formula of
SIZE * PER_BYTE + OUTPUTS * PER_OUTPUT
where we reduce the PER_BYTE fee back to what it was in 3.x; and with
the PER_OUTPUT fee set to 0.02 LOKI. This compares to the 4.x fee of:
SIZE * PER_BYTE * 80
(the *80 multiple was introduced in 4.x).
It also reduces the multiplier for the maximum priority level to 125
instead of 1000 because 1000 produced uselessly high tx fees. The new
multipliers go up 5x at each level: {1, 5, 25, 125} while previously
they went {1, 5, 25, 1000}.
As for the base change: we added the *80 multiplier in 4.x because we
wanted to make a theoretical de-anonymizing tx spam attack more costly.
The unanticipated consequence was that we also made *large* transactions
(such as sweeps) considerably more costly despite the fact that these
transactions typically only create 2 outputs.
This better captures what we meant to do in 4.x (making output creation
relatively more expensive) without making large txes (e.g. sweeps
required for staking) highly expensive.
The end effect is that the fee for a minimum-sized, 1-input/2-output
transaction should stay roughly the same (slightly over 0.04 LOKI),
while a 100-input/2-output transaction (a typical spend or sweep from a
wallet with lots of smaller rewards) will drop in fee by somewhere
around 95%.
The most efficient theoretical deanonymizing tx spam of this sort was a
1-input/16-output transaction, which becomes about 2.5x as expensive as
it currently is in v4.x.
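The 5.x fee formula above can be sketched as follows. This is a hedged illustration only: `COIN` and `FEE_PER_BYTE` are placeholder values, not the real constants from cryptonote_config.h, and `calculate_fee` is a hypothetical helper, not the codebase's actual function.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative placeholders, NOT the real cryptonote_config.h constants.
constexpr uint64_t COIN           = 1000000000; // atomic units per LOKI (assumed)
constexpr uint64_t FEE_PER_BYTE   = 215;        // assumed per-byte base fee
constexpr uint64_t FEE_PER_OUTPUT = COIN / 50;  // 0.02 LOKI per output

// Priority multipliers go up 5x per level: {1, 5, 25, 125}.
constexpr uint64_t PRIORITY_MULT[] = {1, 5, 25, 125};

// SIZE * PER_BYTE + OUTPUTS * PER_OUTPUT, scaled by the priority level.
uint64_t calculate_fee(uint64_t tx_size, uint64_t n_outputs, unsigned priority)
{
    uint64_t mult = PRIORITY_MULT[priority == 0 ? 0 : priority - 1];
    return (tx_size * FEE_PER_BYTE + n_outputs * FEE_PER_OUTPUT) * mult;
}
```

Under these assumed constants, a ~2000-byte 1-input/2-output tx at default priority comes out just over 0.04 LOKI, dominated by the two 0.02-LOKI output fees, which matches the figure quoted above.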
This removes a bunch of macros from cryptonote_config.h that either
aren't used at all, or have never applied to Loki, along with
removing/simplifying some of the dead code touching these macros.
A few of these are still used only in the test suite, so I moved them
there instead.
One of these sounds sort of scary -- ALLOW_DEBUG_COMMANDS -- but this
has *always* been forcibly enabled with no way to disable it going back
to the very first monero git commit.
DYNAMIC_FEE_PER_KB_BASE_FEE_V5 was defined in a very strange way that
doesn't make a lot of sense (including using a constant that is not
otherwise applied in loki), so I just replaced it with the expanded
value.
* Enforce minimum storage server version since hardfork 13
* Send Storage Server version as a triplet
* Send Storage Server version as a triplet of numbers, not strings
* Enforce Storage Server version immediately
only_serialize_quorums should set only_store_quorums flag
Simplify detach logic since we don't need constants anymore
Move state storage comment to ::store
Fix off by one in detach on recent states
If we are re-deriving states we still also need the historical quorums
so that we can process incoming blocks with state changes. Without them,
state changes will be ignored and skipped, causing inconsistent state in
the service node list.
This mandates a rescan: we store the most recent states in state_t's,
and only the quorums for states preceding 10k intervals. We can no
longer store just the most recent state in the DB, as it would similarly
be missing quorums when re-deriving on start-up.
When a node gets recommissioned in 4.0.4 we reset its timestamp to the
current time to delay obligations checks for newly recommissioned nodes,
but this reset caused problems:
- the code runs not only when a fresh block is received, but also when
syncing or rescanning, and so time(NULL) gets used to update the
node's timestamp even if it is an old record; since proof info is
shared across states, this affects the current state.
- as a result of the above, a just-rescanned node that has been
decommissioned at some point in the past will think it has just sent a
proof, and so won't send any proofs for an hour.
- A just-rescanned node won't accept or relay, for the first half hour,
any proofs for any node that was recommissioned during its scan; this
lack of relaying can cause chaos in getting uptime proofs out across the
network, especially while we still have 4.0.3 nodes that need it.
To address the first issue, this switches the recommissioning to use the
block timestamp rather than the current timestamp. This *will* be
slightly delayed in the case of current blocks (since a block timestamp
is the time the pool *started* working on the block, which is generally
the time the previous block was found on the network), but even with an
exceptionally long block delay (e.g. 20 minutes) we are still fending
off obligations checks for 1h40m.
That would partially fix issues 2 and 3, but we actually don't want a
recommissioning to look like a received uptime proof, for a couple of
reasons:
- When we haven't actually received an uptime proof it's confusing to
report that we have (at the recommission time) and may mask an
underlying issue of a node that isn't actually sending proofs for some
reason (which might be more common for a node that has just been
decommissioned/recommissioned). There's also a related weird state
here for nodes that have come on recently: they think the SN is
active, but have 0's for IP and storage server port.
- 4.0.3 nodes don't get the updated timestamp and so really need the
proof to come through even when the 4.0.4 nodes don't think it's
important/acceptable.
So to also fix these, this commit adds an "effective_timestamp" to the
proof info: if it is larger than the actual timestamp field, we use it
instead of the actual one for obligations checking. On a recommission,
we update only the effective field so that we can delay obligations
checking for a couple of hours without delaying actual proof info going
over the network.
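The effective_timestamp mechanism above can be sketched roughly as below. The struct layout, function names, and the ~2h grace period are illustrative assumptions, not the exact code.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Simplified sketch of the proof info described above (names assumed).
struct proof_info {
    uint64_t timestamp = 0;           // last actual uptime proof received
    uint64_t effective_timestamp = 0; // bumped on recommission, never relayed
};

// Obligations checking uses whichever timestamp is larger, so a
// recommission delays checks without pretending a proof was received.
uint64_t obligations_timestamp(const proof_info &proof)
{
    return std::max(proof.timestamp, proof.effective_timestamp);
}

// On recommission, bump only the effective field, and use the *block*
// timestamp rather than time(NULL); the 2h grace value is an assumption.
void on_recommission(proof_info &proof, uint64_t block_timestamp)
{
    proof.effective_timestamp = block_timestamp + 2 * 60 * 60;
}
```

Because `timestamp` is left untouched, 4.0.3-style relaying decisions and status reporting still see the real proof time, while obligations checks are pushed back.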
We can't delete all state change TXs on blockchain increment, in case a
state change came from an alternative block. We must keep them around
even if they would conflict with another state change because, if we
switch chains, they need to be present for the block to still be
considered valid.
Add a .size() check when checking for is_migrated_from_v403, because
when reinitialise is true, m_state_history does not necessarily have any
elements (at least in tests).
* Don't relay service node votes or uptime proof if synchronising
* Only relay votes if state is > state_synchronizing
Not before. Handshake = no, synchronizing = no.
* Relay votes/uptime to all nodes including those on I2P/TOR.
This converts the stored service_node_info value into a
`shared_ptr<const service_node_info>` rather than a plain
`service_node_info`. This yields a huge performance benefit by
eliminating the vast majority of service_node_info construction,
destruction, and copying.
Most of the time when we copy a service_node_info nothing in it has
changed, which means we're storing exactly the same thing; this means an
extra construction for every SN info on every block *and* an extra
destruction when we cull old stored history. By using a
shared_ptr, the vast majority of those constructions and destructions
are eliminated.
The immediately previous commit (upon which this one builds) already
reduced a full rescan from 180s to 171s; this commit further reduces
that time to 104s, or about 42% reduced from the rescan time required
before this pair of commits. (All timings are from the dev.lokinet.org
box, tested over multiple runs with the entire lmdb cached in memory).
With the shared_ptr approach, we only make a copy when a change is
actually needed: because of infrequent (at the per-SN level) events like
a state_change, received reward, contribution, etc. The contained
reference is deliberately `const` so that values are not changeable;
there's a new function that does an explicit copying duplication,
returning the new non-const and storing the const ref in the shared
pointer.
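The copy-on-write pattern described above can be sketched as follows. Field and function names here are illustrative, not the codebase's actual identifiers.

```cpp
#include <cassert>
#include <cstdint>
#include <memory>

// Minimal stand-in for the real struct (assumed field name).
struct service_node_info {
    uint64_t last_reward_block_height = 0;
    // ... many more fields in the real struct ...
};

using sn_info_ptr = std::shared_ptr<const service_node_info>;

// The contained reference is const; the only way to mutate is an
// explicit duplication that copies the value and stores the new copy
// back into the shared pointer as const.
service_node_info &duplicate_info(sn_info_ptr &info)
{
    auto copy = std::make_shared<service_node_info>(*info);
    service_node_info &mutable_ref = *copy;
    info = std::move(copy); // other states keep sharing the old value
    return mutable_ref;
}
```

Successive per-block states copy only the pointer; an infrequent event (state_change, reward, contribution) triggers `duplicate_info()` on that one entry, leaving every other shared entry's storage untouched.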
Related to this is a small change (and fix) to how proof info and
public_ip/storage_port are stored: rather than storing the values in the
service_node_info struct itself, they now get stored in a shared_ptr
inside the service_node_info that intentionally gets shared among all
copies of the service_node_info (that is, a SN info copy deliberately
copies the pointer rather than the values). This also moves the ip/port
values into the proof struct, since that seemed much easier than
maintaining a separate shared_ptr for each value.
Previously, because these were stored as values in the service_node_info
they would actually get rolled back in the event of a reorg, but that
seems highly undesirable: you would end up rolling back to the old
values of the uptime proof and ip address (for example), but that should
not happen: those values are not dependent on the blockchain and so
should not be affected by a reorg/rollback. With this change they
aren't, since there is only one actual proof stored.
Note that the shared storage here only applies to in-memory states;
states loaded from the db will still be duplicated.
This commit makes various simplifications and optimizations, mainly in
the service node list code.
All in all, this shaves about 5% of the CPU time used for re-processing
the entire service node list.
In particular:
- changed m_state_history from a std::vector to a std::set that sorts on
height. This is responsible for the bulk of the CPU reduction by
significantly reducing the amount of work required for checkpoint
culling, which had to shuffle a lot of `state_t`s around when removing
from the middle of a vector.
- the above also allows replacing the binary-search `std::lower_bound`
calls with a much simpler `find()`.
- since the history is now always sorted, removed the error messages
related to sorting that either can't happen (on store) or don't matter
(on load).
- Added some converting constructors to simplify code (for example, a
`state_t` can now be very usefully constructed from an r-value
`state_serialized`).
- Many construct + moves (and a couple construct + copy) were replaced
with in-place constructions.
- removed some unused variables
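The m_state_history change in the first bullet can be sketched as below. `state_t` is reduced to its ordering key, and `cull_states_below` is a hypothetical helper showing why erasure gets cheap; neither is the exact code.

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// A std::set ordered by height gives O(log n) lookup via find() and a
// single range erase for culling, instead of shuffling elements out of
// the middle of a vector.
struct state_t {
    uint64_t height;
    // ... quorums, service node infos, etc. in the real struct ...
    bool operator<(const state_t &other) const { return height < other.height; }
};

using state_history_t = std::set<state_t>;

// Cull everything strictly below `cutoff` with one range erase (assumed
// helper for illustration).
void cull_states_below(state_history_t &history, uint64_t cutoff)
{
    history.erase(history.begin(), history.lower_bound(state_t{cutoff}));
}
```

Since the set is always sorted, the on-store sort checks and on-load ordering errors mentioned above become unnecessary by construction.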
Ordinarily on shutdown we would discard the quorum states that are
converted and serialised once on initial start-up with the new software.
If these were lost, we would not be able to process incoming votes or
any state changes and would essentially halt, short of forcing a rescan
of the service node list back to a consistent state.
* Update state transition check to account for height and universally set timestamp on recommission
Reject invalidated state changes by their height after HF13
* Prune invalidated state changes on blockchain increment
Simplify check_tx_inputs for state_changes by using service node list
Instead of querying the last 60 historical blocks for every transaction,
use the service node list and determine the state of the service node
and if it can transition to its new state.
We also now enforce, at hardfork 13, that the network cannot commit
transactions to the blockchain if they would have been invalidated by
a newer state change that is already on the blockchain.
This is backwards compatible all the way back to hardfork 9.
Greatly simplify state change tx pruning on block added
Use the new stricter rules for pruning state changes in the txpool
We can do so because pruning the TX pool won't cause issues at the
protocol level, and the more nodes we can upgrade, the better network
behaviour we get in terms of propagating an intrinsically correct
ordering of state changes to other peers.
* Don't generate state changes if not valid, disallow voting if node is non-votable