This purges epee::critical_region/epee::critical_section and the awful
CRITICAL_REGION_LOCAL and CRITICAL_REGION_LOCAL1 and
CRITICAL_REGION_BEGIN1 and all that crap from epee code.
This wrapper class around a mutex is just painful, macro-infested
indirection that accomplishes nothing (and, worse, forces all using code
to use a std::recursive_mutex even when a different mutex type is more
appropriate).
This commit purges it, replacing the "critical_section" mutex wrappers
with either std::mutex, std::recursive_mutex, or std::shared_mutex as
appropriate. I kept anything that looked uncertain as a
recursive_mutex, converted simple cases that obviously don't recurse to
std::mutex, and gave simple cases with reader/writer mechanics a
shared_mutex.
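The reader/writer case looks roughly like this; a minimal sketch with a hypothetical cache class, showing why a plain std::shared_mutex (with C++17 deduced lock types) beats a wrapper that forces a recursive mutex on everyone:

```cpp
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>
#include <unordered_map>

// Hypothetical cache illustrating the replacement choices: a plain
// std::shared_mutex for reader/writer access instead of the old
// epee::critical_section wrapper (which always forced a recursive mutex).
class name_cache {
    mutable std::shared_mutex mutex_;
    std::unordered_map<int, std::string> names_;
public:
    std::optional<std::string> get(int id) const {
        std::shared_lock lock{mutex_};  // many readers may hold this at once
        auto it = names_.find(id);
        if (it == names_.end()) return std::nullopt;
        return it->second;
    }
    void set(int id, std::string name) {
        std::unique_lock lock{mutex_};  // exclusive access for writes
        names_[id] = std::move(name);
    }
};
```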
Ideally all the recursive_mutexes should be eliminated because a
recursive_mutex is almost always a design flaw where someone has let the
locking code get impossibly tangled, but that requires a lot more time
to properly trace down all the ways the mutexes are used.
Other notable changes:
- There was one NIH promise/future-like class here that was used in
exactly one place (in p2p/net_node); I replaced it with a
std::promise/std::future.
- moved the mutex for LMDB resizing into LMDB itself; having it in the
abstract base class is bad design, and also made it impossible to make a
moveable base class (which gets used for the fake db classes in the test
code).
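The std::promise/std::future replacement follows the standard pattern; a minimal sketch (the function and value are illustrative, not the actual net_node code):

```cpp
#include <future>
#include <string>
#include <thread>

// One thread fulfills the promise (e.g. from a network callback), while the
// waiting side blocks on the future until the value arrives. This is all the
// NIH promise-like class was doing.
std::string wait_for_handshake_result() {
    std::promise<std::string> result;
    std::future<std::string> fut = result.get_future();
    std::thread worker{[&result] {
        result.set_value("connected");  // fulfil the promise from another thread
    }};
    std::string value = fut.get();  // blocks until set_value runs
    worker.join();
    return value;
}
```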
Changes all boost mutexes, locks, and condition_variables to their stl
equivalents.
Changes all lock_guard/unique_lock/shared_lock to not specify the mutex
type (C++17), e.g.
std::lock_guard foo{mutex};
instead of
std::lock_guard<oh::um::what::mutex> foo{mutex};
Also changes some related boost::thread calls to std::thread, and some
related boost chrono calls to stl chrono.
Some boost::thread instances aren't changed to std::thread here because
they rely on boost thread extensions.
High-level details:
This redesigns the RPC layer to make it much easier to work with,
decouples it from an embedded HTTP server, and gets the vast majority of
the RPC serialization and dispatch code out of a very commonly included
header.
There is unfortunately rather a lot of interconnected code here that
cannot be easily separated out into separate commits. The full details
of what happens here are as follows:
Major details:
- All of the RPC code is now in a `cryptonote::rpc` namespace; this
renames quite a bit to be less verbose: e.g. CORE_RPC_STATUS_OK
becomes `rpc::STATUS_OK`, and `cryptonote::COMMAND_RPC_SOME_LONG_NAME`
becomes `rpc::SOME_LONG_NAME` (or just SOME_LONG_NAME for code already
working in the `rpc` namespace).
- `core_rpc_server` is now completely decoupled from providing any
request protocol: it is now *just* the core RPC call handler.
- The HTTP RPC interface now lives in a new rpc/http_server.h; this code
handles listening for HTTP requests and dispatching them to
core_rpc_server, then sending the results back to the caller.
- There is similarly a rpc/lmq_server.h for LMQ RPC code; more details
on this (and other LMQ specifics) below.
- RPC implementing code now returns the response object and throws when
things go wrong, which simplifies much of the RPC error handling. Handlers
can throw anything; generic exceptions get logged and a generic
"internal error" message gets returned to the caller, but there is
also an `rpc_error` class to return an error code and message used by
some json-rpc commands.
- RPC implementing functions now overload `core_rpc_server::invoke`
following the pattern:
RPC_BLAH_BLAH::response core_rpc_server::invoke(RPC_BLAH_BLAH::request&& req, rpc_context context);
This overloading makes the code vastly simpler: all instantiations are
now done with a small amount of generic instantiation code in a single
.cpp rather than needing to go to hell and back with a nest of epee
macros in a core header.
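The overload-plus-generic-glue pattern can be sketched like this (types, names, and bodies here are hypothetical stand-ins, not the real core_rpc_server code):

```cpp
#include <string>
#include <utility>

// Each RPC command carries its own request/response structs.
struct GET_HEIGHT {
    struct request {};
    struct response { unsigned height; };
};
struct GET_VERSION {
    struct request {};
    struct response { std::string version; };
};

// The handler class just overloads invoke() per command.
struct core_rpc_server {
    GET_HEIGHT::response invoke(GET_HEIGHT::request&&) { return {123}; }
    GET_VERSION::response invoke(GET_VERSION::request&&) { return {"7.1.0"}; }
};

// A single piece of generic glue, instantiated in one .cpp, works for every
// command following the pattern -- no per-command macros in a core header.
template <typename RPC>
typename RPC::response dispatch(core_rpc_server& server, typename RPC::request req) {
    return server.invoke(std::move(req));
}
```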
- each RPC endpoint is now defined by the RPC types themselves,
including its accessible names and permissions, in
core_rpc_server_commands_defs.h:
- every RPC structure now has a static `names()` function that returns
the names by which the end point is accessible. (The first one is
the primary, the others are for deprecated aliases).
- RPC command wrappers define their permissions and type by inheriting
from special tag classes:
- rpc::RPC_COMMAND is a basic, admin-only, JSON command, available
via JSON RPC. *All* JSON commands are now available via JSON RPC,
instead of the previous mix of some being at /foo and others at
/json_rpc. (Ones that were previously at /foo are still there for
backwards compatibility; see `rpc::LEGACY` below).
- rpc::PUBLIC specifies that the command should be available via a
restricted RPC connection.
- rpc::BINARY specifies that the command is not JSON, but rather is
accessible as /name and takes and returns values in the magic epee
binary "portable storage" (lol) data format.
- rpc::LEGACY specifies that the command should be available via the
non-json-rpc interface at `/name` for backwards compatibility (in
addition to the JSON-RPC interface).
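The tag-class idea boils down to empty base structs queried at compile time; a hedged sketch (the tag names match the description above, but the structure and the `names()` shape are illustrative):

```cpp
#include <string_view>
#include <type_traits>
#include <vector>

struct RPC_COMMAND {};  // basic JSON command, admin-only by default
struct PUBLIC {};       // also available on restricted connections
struct BINARY {};       // epee binary "portable storage" format, not JSON
struct LEGACY {};       // also exposed at /name for backwards compatibility

// A command declares its type and permissions just by inheriting tags.
struct GET_INFO : RPC_COMMAND, PUBLIC, LEGACY {
    static std::vector<std::string_view> names() { return {"get_info", "getinfo"}; }
};

// Generic dispatch code checks the tags at compile time.
template <typename RPC> constexpr bool is_public_v = std::is_base_of_v<PUBLIC, RPC>;
template <typename RPC> constexpr bool is_binary_v = std::is_base_of_v<BINARY, RPC>;
```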
- some epee serialization got unwrapped and de-templatized so that it
can be moved into a .cpp file with just declarations in the .h. (This
makes a *huge* difference for core_rpc_server_commands_defs.h and for
every compilation unit that includes it, which previously had to
compile all the serialization code and then throw all but one copy away
at link time). This required some new macros so as to not break the many
places that still use the old way of putting everything in the headers.
The RPC code uses this, as do a few other places; there are comments
in contrib/epee/include/serialization/keyvalue_serialization.h as to
how to use it.
- Detemplatized a bunch of epee/storages code. Most of it shouldn't
have been using templates at all (because it can only ever be called
with one type!), and now it isn't. This broke some things that didn't
properly compile because of missing headers or (in one case) a messed
up circular dependency.
- Significantly simplified a bunch of over-templatized serialization
code.
- All RPC serialization definitions are now out of
core_rpc_server_commands_defs.h and into a single .cpp file
(core_rpc_server_commands_defs.cpp).
- core RPC no longer uses the disgusting
BEGIN_URI_MAP2/MAP_URI_BLAH_BLAH macros. This was a terrible design
that forced slamming tons of code into a common header that didn't
need to be there.
- epee::struct_init is gone. It was a horrible hack that instantiated
multiple templates just so a lazy coder could write
`some_type var;` instead of properly value-initializing with
`some_type var{};`.
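The difference between the two spellings, for anyone unsure why the braces matter (struct and function are illustrative):

```cpp
// Why `some_type var{};` makes epee::struct_init unnecessary: value
// initialization zeroes members that have no default member initializer.
struct some_type {
    int count;
    double ratio;
};

some_type make_initialized() {
    some_type var{};  // value-initialized: count == 0, ratio == 0.0
    return var;
    // `some_type var;` (no braces) would leave both members indeterminate
}
```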
- Removed a bunch of useless crap from epee. In particular, forcing
extra template instantiations all over the place in order to nest
return objects inside JSON RPC values is no longer needed, as is a
bunch of code related to the above de-macroization.
- get_all_service_nodes, get_service_nodes, and get_n_service_nodes are
now combined into a single `get_service_nodes` (with deprecated
aliases for the others), which eliminates a fair amount of
duplication. The biggest obstacle here was getting the requested
fields reference passed through: this is now done by a new ability to
stash a context in the serialization object that can be retrieved by a
sub-serialized type.
LMQ-specifics:
- The LokiMQ instance moves into `cryptonote::core` rather than being
inside cryptonote_protocol. Currently the instance is used both for
qnet and rpc calls (and so needs to be in a common place), but I also
intend future PRs to use the batching code for job processing
(replacing the current threaded job queue).
- rpc/lmq_server.h handles the actual LMQ-request-to-core-RPC glue.
Unlike http_server it isn't technically running the whole LMQ stack
from here, but the parallel name with http_server seemed appropriate.
- All RPC endpoints are supported by LMQ under the same names as defined
generically, but prefixed with `rpc.` for public commands and `admin.`
for restricted ones.
- service node keys are now always available, even when not running in
`--service-node` mode: this is because we want the x25519 key for
being able to offer CURVE encryption for lmq RPC end-points, and
because it doesn't hurt to have them available all the time. In the
RPC layer this is now called "get_service_keys" (with
"get_service_node_key" as an alias) since they aren't strictly only
for service nodes. This also means code needs to check
m_service_node, and not m_service_node_keys, to tell if it is running
as a service node. (This is also easier to notice because
m_service_node_keys got renamed to `m_service_keys`).
- Added block and mempool monitoring LMQ RPC endpoints: `sub.block` and
`sub.mempool` subscribes the connection for new block and new mempool
TX notifications. The latter can notify on just blink txes, or all
new mempool txes (but only new ones -- txes dumped from a block don't
trigger it). The client gets pushed a [`notify.block`, `height`,
`hash`] or [`notify.tx`, `txhash`, `blob`] message when something
arrives.
Minor details:
- rpc::version_t is now a {major,minor} pair. Forcing everyone to pack
and unpack a uint32_t was gross.
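The packing the pair replaces looked roughly like this (type alias and field widths illustrative):

```cpp
#include <cstdint>
#include <utility>

// Old convention: major/minor crammed into one uint32_t that every caller
// had to pack and unpack by hand. New convention: just a {major, minor} pair.
using version_t = std::pair<uint16_t, uint16_t>;

uint32_t pack(version_t v) { return (uint32_t(v.first) << 16) | v.second; }

version_t unpack(uint32_t packed) {
    return {uint16_t(packed >> 16), uint16_t(packed & 0xffff)};
}
```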
- Changed some macros to constexprs (e.g. CORE_RPC_ERROR_CODE_...).
(This immediately revealed a couple of bugs in the RPC code that were
assigning CORE_RPC_ERROR_CODE_... values to a string; it only worked
because the macro value allows implicit conversion to a char).
- De-templatizing useless templates in epee (i.e. a bunch of templated
types that were never invoked with different types) revealed a painful
circular dependency between epee and non-epee code for tor_address and
i2p_address. This crap is now handled in a suitably named
`net/epee_network_address_hack.cpp` hack because it really isn't
trivial to extricate this mess.
- Removed `epee/include/serialization/serialize_base.h`. Amazingly the
code somehow still all works perfectly with this previously vital
header removed.
- Removed bitrotted, unused epee "crypted_storage" and
"gzipped_inmemstorage" code.
- Replaced a bunch of epee::misc_utils::auto_scope_leave_caller with
LOKI_DEFERs. The epee version involves quite a bit more instantiation
and is ugly as sin. Also made the `loki::defer` class invokable for
some edge cases that need calling before destruction in particular
conditions.
- Moved the systemd code around; it makes much more sense to send the
systemd "started" notification from daemon.cpp as late as possible
rather than from core (when we can still have startup failures, e.g. if
the RPC layer can't start).
- Made the systemd short status string available in the get_info RPC
(which no longer requires building with systemd).
- during startup, print (only) the x25519 when not in SN mode, and
continue to print all three when in SN mode.
- DRYed out some RPC implementation code (such as set_limit)
- Made wallet_rpc stop using a raw m_wallet pointer
This reduces blink fees by half (from 5x to 2.5x base fee) at HF15, and
makes anything other than "unimportant" priority map to blink for
ordinary transactions.
For non-blink txes priorities are still accepted (so that, if the
mempool is clogged, you can still get a registration or stake through by
upping the priority).
It also updates the wallet's default priority for transactions to blink
(that is, "default" now becomes blink).
With the priorities gone and default set to blink, the backlog-checking
code and automatic priority bumping code don't serve any useful purpose,
so this rips them out, along with a few other related code
simplifications.
cryptonote_protocol_handler calls `pool.get_blink(hash)` while already
holding a blink shared lock, which should have been
`pool.get_blink(hash, true)` to avoid `get_blink` trying to take its own
lock.
That double lock is undefined behaviour and can cause a deadlock on the
mutex, although it appears rare that it actually does. If it does,
however, this eventually backs up into vote relaying during the idle
loop, which then stalls the idle loop so we stop sending out uptime
proofs (since that is also in the idle loop).
A simple fix here is to add the `true` argument, but on reconsideration
this extra argument to take or not take a lock is messy and error prone,
so this commit removes the second argument entirely and instead
documents which calls must and must not hold a lock, getting rid of the
`have_lock` argument from the three methods (get_blink, has_blink, and
add_existing_blink) that had it. This ends up having only a small impact on
calling code - the vast majority of callers already hold a lock, and the
few that don't are easily adjusted.
This replaces the horrible, horrible, badly misused templated
once_a_time_seconds and once_a_time_milliseconds with a `periodic_task`
that works the same way but takes parameters as constructor arguments
instead of template parameters.
It also makes various small improvements:
- uses std::chrono::steady_clock instead of ifdef'ing platform dependent
timer code.
- takes a std::chrono duration rather than a template integer and
scaling parameter.
- timers can be reset to trigger on the next invocation, and this is
thread-safe.
- timer intervals can be changed at run-time.
This all then gets used to reset the proof timer immediately upon
receiving a ping (initially or after expiring) from storage server and
lokinet so that we send proofs out faster.
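The shape of the new timer can be sketched like this; a simplified, single-threaded illustration (the real class is described above as thread-safe, which this sketch omits for brevity):

```cpp
#include <chrono>

// Rough sketch of the periodic_task idea: the interval is a run-time
// std::chrono duration passed to the constructor (not a template parameter),
// the timer can be reset to fire on the next check, and the interval can be
// changed at run-time.
class periodic_task {
    std::chrono::steady_clock::duration interval_;
    std::chrono::steady_clock::time_point last_run_;
    bool reset_ = true;  // fire on the first invocation
public:
    explicit periodic_task(std::chrono::steady_clock::duration interval)
        : interval_{interval} {}

    // Runs f and returns true if the interval has elapsed (or reset was requested).
    template <typename F>
    bool run(F&& f) {
        auto now = std::chrono::steady_clock::now();
        if (!reset_ && now - last_run_ < interval_) return false;
        reset_ = false;
        last_run_ = now;
        f();
        return true;
    }
    void reset() { reset_ = true; }  // trigger on the next invocation
    void interval(std::chrono::steady_clock::duration i) { interval_ = i; }
};
```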
Blockchain::prepare_handle_incoming_blocks locks m_tx_pool, but uses a
local RAII lock on the blockchain object itself, then also starts a
batch. Blockchain::cleanup_handle_incoming_blocks then also takes out a
local RAII blockchain lock, then cleans up the batch.
But the lmdb layer is broken by design in that it throws an exception if
any thread tries to write to the database while a batch is active in
another thread, and so the blockchain lock is *also* used to guard writes.
Holding an open batch but *not* holding the blockchain lock then hits
this exception if a write arrives in another thread at just the right
time.
This is, of course, terrible design at multiple layers, but this close
to release I am reluctant to make more drastic fixes.
Other small changes here:
- All the locks in `blockchain.cpp` now use tools::unique_lock or
tools::unique_locks rather than the nasty epee macro. This also
reduces the likelihood of accidental deadlock because the dual
txpool-blockchain locks are now taken out simultaneously via std::lock
rather than sequentially.
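The deadlock-avoidance point in a nutshell (mutex names illustrative): acquiring both mutexes through std::scoped_lock/std::lock uses a deadlock-free ordering algorithm, whereas two threads locking the pair sequentially in opposite orders can deadlock.

```cpp
#include <mutex>
#include <thread>

std::mutex txpool_mutex, blockchain_mutex;
int work_done = 0;

void locked_work() {
    // Both mutexes acquired atomically with deadlock avoidance, regardless
    // of the order other threads name them in.
    std::scoped_lock both{txpool_mutex, blockchain_mutex};
    ++work_done;
}
```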
- Removed a completely useless "if (1)". Git blame shows that there was
previously a condition here, but apparently the upstream monero author
who changed it was afraid of removing the `if` keyword.
- Reduced the sleep in the loop that waits for a batch to 100ms from
1000ms because sleeping for a full second for a fairly light test is
insane.
- boost isn't happy calling boost::lock() on the tx pool or blockchain
object because the lock/unlock/try_lock methods are const, and so the
workaround of using boost::lock (because std::lock and
std::shared_timed_mutex are broken on the macOS SDK 10.11 that we use
for mac builds) now requires extra workarounds. Joy.
The MacOSX 10.11 SDK we use is broken AF: it claims to support C++14,
but really only the headers were upgraded, not the library itself, so
using std::shared_timed_mutex just results in a linking failure.
Upgrading the SDK is a huge pain (I tried, failed, and gave up), so for
now temporarily switch to boost::shared_mutex until we sort out the
macOS build disaster.
This adds support for bumping conflicting transactions from the mempool
upon receipt of a blink tx if those conflicting txs are not themselves
blinks.
This commit chops handle_incoming_txs effectively in half: a
parse_incoming_txs phase, followed by a handle_incoming_txs phase. The
split is needed so that we can first parse the txes and then, assuming
they parse successfully, flag them as blink txes if blink info was
included with them, so that the actual insertion becomes forced.
There is a stub here for also allowing a rollback of the mutable part of
the chain; implementation of actually doing that rollback will follow in
another commit.
To make it obvious, and keep it consistent with all the other
blink-related methods having "blink" in their name.
Also fix grammar in a related debug message.
Extracts the tx removal lambda into a new method.
DRYs the nearly identical prune loop bodies (the only difference between
them is the iteration order).
Fix undefined behaviour in mempool pruning: on removal of a standard tx
the iterator was updated to the result of `.erase()`, but that's wrong
because the loop is going in backwards order, and so the first prune
would leave the iterator pointing at `.end()`, which would be dereferenced
in the next iteration. The (new, DRY) lambda now iterates in the
desired direction.
This also makes (and uses) blockchain and tx_pool implement the standard
library Lockable concept by adding a `try_lock()` so that they can be
locked simultaneously. It also updates them to hold a
boost::recursive_mutex directly rather than going through the pointless
and horribly designed epee `critical_section` wrapper (seriously who
thinks that making a mutex class copyable where a copy holds a new,
entirely unrelated mutex is a good idea?)
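Satisfying the standard Lockable requirements just means exposing lock/unlock/try_lock; a minimal sketch of the shape (class name illustrative; the real code holds a boost::recursive_mutex because of the macOS SDK issue, std::recursive_mutex is used here for a self-contained example):

```cpp
#include <mutex>

// A class satisfying the Lockable requirements (lock/unlock/try_lock) can be
// passed directly to std::lock so that two such objects -- here standing in
// for tx_pool and Blockchain -- are locked simultaneously, deadlock-free.
class lockable_pool {
    mutable std::recursive_mutex mutex_;
public:
    void lock() const { mutex_.lock(); }
    void unlock() const { mutex_.unlock(); }
    bool try_lock() const { return mutex_.try_lock(); }
};
```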
This makes get_blink_hashes_and_mined_heights filter out blinks in
immutable blocks; since we have to find them all here in order to do
that (and because this is regularly called) it's also a good place to
expire old blink entries.
- actually include the blink hashes in the core sync data
- fix cleanup to delete heights in (0, immutable] instead of [0,
immutable); we want to keep 0 because it signifies the mempool, and we
only need blocks after immutable, not immutable itself.
- fixed NOTIFY_REQUEST_GET_TXS to handle mempool txes properly (it was
only searching the chain and ignored missed_txs, but missed_txs are ones
we need to look up in the mempool)
- Add a method to tx_pool (needed for the above) to grab multiple txes
by hash (essentially a multi-tx version of `get_transaction()`), and
change get_transaction() to use it with a single value.
- Added various debugging statements
- Added a bunch of comments to each condition of the preliminary blink
data check condition.
- Don't abort blink addition on a single signature failure: if there are
enough valid signatures we should still accept it.
- Check for blink signature approval when receiving blink signatures;
it's not enough to know that all were added successfully, we also have
to ask the blink tx if it is approved (which does additional checks on
subquorum counts) once we add them all.
This adds blink tx synchronization: when doing a periodic sync with
other nodes each node includes a map of {HEIGHT, HASH} pairs where
HEIGHT is a mined, post-immutable block height and HASH is an xor of all
the tx hashes mined into that block.
If upon receipt the node disagrees about what it thinks the HASH should
be, it can request a list of txes for one or more disagreeing heights,
for which it gets a list of tx hashes of all blink txes mined into that
block.
If it is then missing any of those TXes, this adds an ability to request
that the remote send TXes via the existing NOTIFY_NEW_TRANSACTIONS
mechanism, but with an added flag (so that these don't get relayed).
This originally was going to request the TXes via the existing
NOTIFY_REQUEST_GET_OBJECTS, which has a `txs` field, but unfortunately
any txs passed to it are completely ignored; it is *only* usable for
block synchronization. As such I renamed it and removed the `txs` field
to make the responsibility/capability clearer.
This adds the ability for check_fee() to also check the burn amount.
This requires passing extra info through `add_tx()` (and the various
things that call it), so I took the:
bool keeped_by_block, bool relayed, bool do_not_relay
argument triplet, moved it into a struct in tx_pool.h, then added the other fee
options there (along with some static factory functions for generating the
typical sets of options).
The majority of this commit is chasing that change through the codebase and
test suite.
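The options-struct-with-factories pattern looks roughly like this (field and factory names are hypothetical, not the actual tx_pool.h contents):

```cpp
#include <cstdint>

// Replaces the `bool keeped_by_block, bool relayed, bool do_not_relay`
// triplet with one named struct, plus the burn-checking knobs, and static
// factories for the typical combinations.
struct tx_pool_options {
    bool kept_by_block = false;
    bool relayed = false;
    bool do_not_relay = false;
    uint64_t burn_fixed = 0;    // required fixed burn amount
    uint64_t burn_percent = 0;  // burn as a percentage of the base fee

    static tx_pool_options from_block() { return {true, false, false}; }
    static tx_pool_options from_peer()  { return {false, true, false}; }
    static tx_pool_options new_tx()     { return {}; }
};
```

Call sites then read `add_tx(tx, tx_pool_options::from_block())` instead of a trail of unlabeled bools.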
This is used by blink but should also help LNS and other future burn
transactions to verify a burn amount when adding the transaction to the
mempool. It supports a fixed burn amount, a burn amount as a multiple of the
minimum tx fee, and also allows increasing the minimum tx fee (so that,
for example, we could require blink txes to pay miners 250% of the usual
minimum (unimportant) priority tx fee).
- Removed a useless core::add_new_tx() overload that wasn't used anywhere.
Blink-specific changes:
(I'd normally separate these into a separate commit, but they got interwoven
fairly heavily with the above change).
- changed the way blink burning is specified so that we have three knobs for
fee adjustment (fixed burn fee; base fee multiple; and required miner tx fee).
The fixed amount is currently 0, base fee is 400%, and required miner tx fee is
simply 100% (i.e. no different from a normal transaction). This is the same as
before this commit, but is changing how they are being specified in
cryptonote_config.h.
- blink tx fee, burn amount, and miner tx fee (if > 100%) now get checked
before signing a blink tx. (These fee checks don't apply to anyone else --
when propagating over the network only the miner tx fee is checked).
- Added a couple of checks for blink quorums: 1) make sure they have reached
the blink hf; 2) make sure the submitted tx version conforms to the current hf
min/max tx version.
- print blink fee information in simplewallet's `fee` output
- add "typical" fee calculations in the `fee` output:
[wallet T6SCwL (has locked stakes)]: fee
Current fee is 0.000000850 loki per byte + 0.020000000 loki per output
No backlog at priority 1
No backlog at priority 2
No backlog at priority 3
No backlog at priority 4
Current blink fee is 0.000004250 loki per byte + 0.100000000 loki per output
Estimated typical small transaction fees: 0.042125000 (unimportant), 0.210625000 (normal), 1.053125000 (elevated), 5.265625000 (priority), 0.210625000 (blink)
where "small" here is the same tx size (2500 bytes + 2 outputs) used to
estimate backlogs.
- Adds blink signature synchronization and storage through the regular
p2p network
- Adds wallet support (though this is still currently buggy and needs
additional fixes - it sees the tx when it arrives in the mempool but
isn't properly updating when the blink tx gets mined.)
Pool printing is separately implemented in rpc_command_executor; this
method in tx_pool is both obsolete (missing some fields) and not called
anywhere.
This is the bulk of the work for blink. There are two pieces yet to come,
which will follow shortly: the p2p communication of blink
transactions (which needs to be fully synchronized, not just shared,
unlike regular mempool txes); and an implementation of fee burning.
Blink approval, multi-quorum signing, cli wallet and node support for
submission denial are all implemented here.
This overhauls and fixes various parts of the SNNetwork interface to fix
some issues (particularly around non-SN communication with SNs, which
wasn't working).
There are also a few sundry FIXME's and TODO's of other minor details
that will follow shortly under cleanup/testing/etc.
This code references m_tx_pool.m_transactions which got removed in
Monero 11, so apparently no one has noticed or tested anything here
since then (despite adding new commits into it).
Neither of these has a place in modern C++11; boost::value_initialized
is entirely superseded by `Type var{};`, which does value initialization
(or default construction if a default constructor is defined). More
problematically, each `boost::value_initialized<T>` requires
instantiation of another wrapping templated type which is a pointless
price to pay the compiler in C++11 or newer.
Also removed is the AUTO_VAL_INIT macro (which is just a simple macro
around constructing a boost::value_initialized<T>).
BOOST_FOREACH is a similarly massive pile of code to implement
C++11-style for-each loops. (And bizarrely it *doesn't* appear to fall
back to C++11 range-based for loops even when built with a C++11 compiler!)
This removes both entirely from the codebase.
If the peer (whether pruned or not itself) supports sending pruned blocks
to syncing nodes, the pruned version will be sent along with the hash
of the pruned data and the block weight. The original tx hashes can be
reconstructed from the pruned txes and their prunable data hash. Those
hashes and the block weights are hashed and checked against the set of
precompiled hashes, ensuring the data we received is the original data.
It is currently not possible to use this system when not using the set
of precompiled hashes, since block weights can not otherwise be checked
for validity.
This is off by default for now, and is enabled by --sync-pruned-blocks
* Update state transition check to account for height and universally set timestamp on recommission
Reject invalidated state changes by their height after HF13
* Prune invalidated state changes on blockchain increment
Simplify check_tx_inputs for state_changes by using service node list
Instead of querying the last 60 historical blocks for every transaction,
use the service node list and determine the state of the service node
and if it can transition to its new state.
We also now enforce at hardfork 13 that the network cannot commit
transactions to the network if they would have been invalidated by
a newer state change that is already on the blockchain.
This is backwards compatible all the way back to hardfork 9.
Greatly simplify state change tx pruning on block added
Use the new stricter rules for pruning state changes in the txpool
We can do so because pruning the TX pool won't cause issues at the
protocol level, but the more people we can upgrade the better network
behaviour we get in terms of propagating more intrinsically correct
ordering of state changes to other peers.
* Don't generate state changes if not valid, disallow voting if node is non-votable
This replaces the deregistration mechanism with a new state change
mechanism (beginning at the v12 fork) which can change a service node's
network status via three potential values (and is extensible in the
future to handle more):
- deregistered -- this is the same as the existing deregistration; the
SN is instantly removed from the SN list.
- decommissioned -- this is a sort of temporary deregistration: your SN
remains in the service node list, but is removed from the rewards list
and from any network duties.
- recommissioned -- this tx is sent by a quorum if they observe a
decommissioned SN sending uptime proofs again. Upon reception, the SN
is reactivated and put on the end of the reward list.
Since this is broadening the quorum use, this also renames the relevant
quorum to an "obligations" quorum (since it validates SN obligations),
while the transactions are "state_change" transactions (since they
change the state of a registered SN).
The new parameters added to service_node_rules.h control how this works:
// Service node decommissioning: as service nodes stay up they earn "credits" (measured in blocks)
// towards a future outage. A new service node starts out with INITIAL_CREDIT, and then builds up
// CREDIT_PER_DAY for each day the service node remains active up to a maximum of
// DECOMMISSION_MAX_CREDIT.
//
// If a service node stops sending uptime proofs, a quorum will consider whether the service node
// has built up enough credits (at least MINIMUM): if so, instead of submitting a deregistration,
// it instead submits a decommission. This removes the service node from the list of active
// service nodes both for rewards and for any active network duties. If the service node comes
// back online (i.e. starts sending the required performance proofs again) before the credits run
// out then a quorum will reinstate the service node using a recommission transaction, which adds
// the service node back to the bottom of the service node reward list, and resets its accumulated
// credits to 0. If it does not come back online within the required number of blocks (i.e. the
// accumulated credit at the point of decommissioning) then a quorum will send a permanent
// deregistration transaction to the network, starting a 30-day deregistration count down.
This commit currently includes values (which are not necessarily
finalized):
- 8 hours (240 blocks) of credit required for activation of a
decommission (rather than a deregister)
- 0 initial credits at registration
- a maximum of 24 hours (720 blocks) of credits
- credits accumulate at a rate that you hit 24 hours of credits after 30
days of operation.
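Using the example values above, the credit arithmetic works out as follows (constant names are placeholders modeled on the comment block, not the real service_node_rules.h identifiers):

```cpp
#include <algorithm>
#include <cstdint>

// Example values from the list above: 720 blocks per day, 0 initial credits,
// a 720-block (24h) cap, and a rate that reaches the cap after 30 days.
constexpr uint64_t BLOCKS_PER_DAY          = 720;
constexpr uint64_t INITIAL_CREDIT          = 0;
constexpr uint64_t CREDIT_PER_DAY          = 24;  // 30 days * 24 = 720 = the cap
constexpr uint64_t DECOMMISSION_MAX_CREDIT = 720;

uint64_t credit_at_height(uint64_t blocks_since_registration) {
    uint64_t earned = INITIAL_CREDIT +
        blocks_since_registration * CREDIT_PER_DAY / BLOCKS_PER_DAY;
    return std::min(earned, DECOMMISSION_MAX_CREDIT);
}
```

So a node needs 10 days of uptime to earn the 240 credits (8 hours) required for a decommission rather than a deregister.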
Miscellaneous other details of this PR:
- a new TX extra tag is used for the state change (including
deregistrations). The old extra tag has no version or type tag, so
couldn't be reused. The data in the new tag is slightly more
efficiently packed than the old deregistration transaction, so it gets
used for deregistrations (starting at the v12 fork) as well.
- Correct validator/worker selection required generalizing the shuffle
function to be able to shuffle just part of a vector. This lets us
stick any down service nodes at the end of the potential list, then
select validators by only shuffling the part of the index vector that
contains active service node indices. Once the validators are selected, the
remainder of the list (this time including decommissioned SN indices) is
shuffled to select quorum workers to check, thus allowing decommissioned
nodes to be randomly included in the nodes to check without being
selected as validators.
- Swarm recalculation was not quite right: swarms were recalculated on
SN registrations, even when those registrations included shared node
registrations, but *not* recalculated on stakes. Starting with the
upgrade this behaviour is fixed (swarms aren't actually used currently
and aren't consensus-relevant so recalculating early won't hurt
anything).
- Details on decomm/dereg are added to RPC info and print_sn/print_sn_status
- Slightly improves the % of reward output in the print_sn output by
rounding it to two digits, and reserves space in the output string to
avoid excessive reallocations.
- Adds various debugging at higher debug levels to quorum voting (into
all of voting itself, vote transmission, and vote reception).
- Reset service node list internal data structure version to 0. The SN
list has to be rescanned anyway at upgrade (its size has changed), so we
might as well reset the version and remove the version-dependent
serialization code. (Note that the affected code here is for SN states
in lmdb storage, not for SN-to-SN communication serialization).
* Remove dead branches in hot-path check_tx_inputs
Also renames #define for mixins to better match naming convention
* Shuffle around some more code into common branches
* Fix min/max tx version rules, since there is 1 v2 tx on the v9 fork
* First draft infinite staking implementation
* Actually generate the right key image and expire appropriately
* Add framework to lock key images after expiry
* Return locked key images for nodes, add request unlock option
* Introduce transaction types for key image unlock
* Update validation steps to accept tx types, key_image_unlock
* Add mapping for lockable key images to amounts
* Change inconsistent naming scheme of contributors
* Create key image unlock transaction type and process it
* Update tx params to allow v4 types and as a result construct_tx*
* Fix some serialisation issues not sending all the information
* Fix dupe tx extra tag causing incorrect deserialisation
* Add warning comments
* Fix key image unlocks parsing error
* Simplify key image proof checks
* Fix rebase errors
* Correctly calculate the key image unlock times
* Blacklist key image on deregistration
* Serialise key image blacklist
* Rollback blacklisted key images
* Fix expiry logic error
* Disallow requesting stake unlock if already unlocked client side
* Add double spend checks for key image unlocks
* Rename get_staking_requirement_lock_blocks
To staking_initial_num_lock_blocks
* Begin modifying output selection to not use locked outputs
* Modify output selection to avoid locked/blacklisted key images
* Cleanup and undoing some protocol breakages
* Simplify expiration of nodes
* Request unlock schedules entire node for expiration
* Fix off by one in expiring nodes
* Undo expiring code for pre v10 nodes
* Fix RPC returning register as unlock height and not checking 0
* Rename key image unlock height const
* Undo testnet hardfork debug changes
* Remove is_type for get_type, fix missing var rename
* Move serialisable data into public namespace
* Serialise tx types properly
* Fix typo in no service node known msg
* Code review
* Fix == to >= on serialising tx type
* Code review 2
* Fix tests and key image unlock
* Add additional test, fix assert
* Remove debug code in wallet
* Fix merge dev problem