Neither of these has a place in modern C++11: boost::value_initialized
is entirely superseded by `Type var{};`, which performs value
initialization (or default construction if a default constructor is
defined). More problematically, each `boost::value_initialized<T>`
requires instantiating another wrapping template type, a pointless
cost to impose on the compiler in C++11 or newer.
Also removed is the AUTO_VAL_INIT macro (which is just a simple macro
around constructing a boost::value_initialized<T>).
BOOST_FOREACH is a similarly massive pile of code to implement
C++11-style for-each loops. (And bizarrely it *doesn't* appear to fall
back to C++ for-each loops even when under a C++11 compiler!)
This removes both entirely from the codebase.
This adds a tx extra field that specifies an amount of the tx fee that
must be burned; the miner can claim only (txnFee - burnFee) when
including the transaction in a block.
This will be used for the extra, burned part of blink fees and LNS fees
and any other transaction that requires fee burning in the future.
This allows flushing internal caches (for now, just the bad tx cache;
flushing it will allow debugging a stuck monerod after it has failed to
verify a transaction in a block, since it would otherwise never retry
the verification, making subsequent log level changes pointless)
* Fix elapsed time in storage server warning message
This was passing the raw time value rather than the number of seconds,
so it basically always printed "a long time" instead of the elapsed
time.
* Removed pre-HF12 code
* Wait for storage server before sending proof
On startup we send a proof immediately, but this is misleading to the
operator - they may see a proof broadcast over the network suggesting
everything is good even when the storage server is still down. This
makes lokid wait for a first ping before the first proof so that the
user doesn't get misled.
It also adds the storage server last ping into the `lokid status`
message, such as:
Height: 375634/375634 (100.0%) on mainnet, not mining, net hash 63.00 MH/s, v12, up to date, 1(out)+0(in) connections, uptime 0d 0h 0m 4s
SN: f4558f60b1c4075e469a15411f12d5a747c1c62b44bcbc8523f1a90becc80475 not registered, s.server: NO PING RECEIVED
or:
Height: 375663/375663 (100.0%) on mainnet, not mining, net hash 76.17 MH/s, v12, up to date, 1(out)+0(in) connections, uptime 0d 0h 0m 32s
SN: f4558f60b1c4075e469a15411f12d5a747c1c62b44bcbc8523f1a90becc80475 not registered, s.server: last ping 11 seconds ago
This generates an ed25519 keypair (and from it derives an x25519 keypair)
and broadcasts the ed25519 pubkey in HF13 uptime proofs.
This auxiliary key will be used inside lokid (starting in HF14) in
places like the upcoming quorumnet code, where we need a standard
pub/priv keypair that is usable in external tools (e.g. sodium) without
having to reimplement the incompatible (though still 25519-based) Monero
pubkey format.
This pulls it back into HF13 from the quorumnet code because the
generation code is ready now, and because there may be opportunities to
use this outside of lokid (e.g. in the storage server and in lokinet)
before HF14. Broadcasting it earlier also allows us to be ready to go
as soon as HF14 hits rather than having to wait for every node to have
sent a post-HF14 block uptime proof.
For a similar reason this adds a placeholder for the quorumnet port in
the uptime proof: currently the value is just set to 0 and ignored, but
allowing it to be passed will allow upgraded loki 6.x nodes to start
sending it to each other without having to wait for the fork height so
that they can start using it immediately when HF14 begins.
This puts the SN pubkey/privkey into a single struct (which will have
ed25519/x25519 keys added to it in the future), which simplifies various
places to just pass the struct rather than storing and passing the
pubkey and privkey separately.
If the peer (whether pruned or not itself) supports sending pruned blocks
to syncing nodes, the pruned version will be sent along with the hash
of the pruned data and the block weight. The original tx hashes can be
reconstructed from the pruned txes and their prunable data hash. Those
hashes and the block weights are hashed and checked against the set of
precompiled hashes, ensuring the data we received is the original data.
It is currently not possible to use this system when not using the set
of precompiled hashes, since block weights can not otherwise be checked
for validity.
This is off by default for now, and is enabled by --sync-pruned-blocks
2cd4fd8 Changed the use of boost:value_initialized for C++ list initializer (JesusRami)
4ad191f Removed unused boost/value_init header (whyamiroot)
928f4be Make null hash constants constexpr (whyamiroot)
If a peer views the destination peer as not synchronizing, then the
destination peer should just accept the uptime proof rather than
accepting it and then conditionally relaying it depending on whether it
is synchronizing at the point of attempting to relay (it could have
transitioned into the synchronizing state in the interim between
accepting the proof and attempting to relay it).
Otherwise we get into a ping-pong situation as follows
Node1 sends uptime ->
Node2 receives uptime and relays it back to Node1 for acknowledgement ->
Node1 receives it, handle_uptime_proof returns true to acknowledge ->
Node1 tries to resend to the same peers again
Instead, if we receive our own uptime proof, then acknowledge it but
don't send it on. If we are missing an uptime proof it will have been
submitted automatically by the daemon itself rather than by our own
proof being relayed back to us by other nodes.
Move service node list methods to state_t methods
Add querying state from alt blocks and put key image parsing into function
Incorporate hash into state_t to find alt states
Add a way to query alternate testing quorums
Rebase cleanup
This removes a bunch of macros from cryptonote_config.h that either
aren't used at all, or have never applied to Loki, along with
removing/simplifying some of the dead code touching these macros.
A few of these are still used only in the test suite, so I moved them
there instead.
One of these sounds sort of scary -- ALLOW_DEBUG_COMMANDS -- but this
has *always* been forcibly enabled with no way to disable it going back
to the very first monero git commit.
DYNAMIC_FEE_PER_KB_BASE_FEE_V5 was defined in a very strange way that
doesn't make a lot of sense (including using a constant that is not
otherwise applied in loki), so I just replaced it with the expanded
value.
* Don't relay service node votes or uptime proof if synchronising
* Only relay votes if state is > state_synchronizing
Not before. Handshake = no, synchronizing = no.
* Relay votes/uptime to all nodes including those on I2P/TOR.
This converts the stored service_node_info value into a
`shared_ptr<const service_node_info>` rather than a plain
`service_node_info`. This yields a huge performance benefit by
eliminating the vast majority of service_node_info
construction, destruction, and copying.
Most of the time when we copy a service_node_info nothing in it has
changed, which means we're storing exactly the same thing; this means an
extra construction for every SN info on every block *and* an extra
destruction when we cull old stored history. By using a
shared_ptr, the vast majority of those constructions and destructions
are eliminated.
The immediately previous commit (upon which this one builds) already
reduced a full rescan from 180s to 171s; this commit further reduces
that time to 104s, or about 42% reduced from the rescan time required
before this pair of commits. (All timings are from the dev.lokinet.org
box, tested over multiple runs with the entire lmdb cached in memory).
With the shared_ptr approach, we only make a copy when a change is
actually needed, which happens only on infrequent (at the per-SN level)
events like a state_change, received reward, or contribution. The contained
reference is deliberately `const` so that values are not changeable;
there's a new function that does an explicit copying duplication,
returning the new non-const and storing the const ref in the shared
pointer.
Related to this is a small change (and fix) to how proof info and
public_ip/storage_port are stored: rather than store the values in the
service_node_info struct itself, they now get stored in a shared_ptr
inside the service_node_info that intentionally gets shared among all
copies of the service_node_info (that is, a SN info copy deliberately
copies the pointer rather than the values). This also moves the ip/port
values into the proof struct, since that seemed much easier than
maintaining a separate shared_ptr for each value.
Previously, because these were stored as values in the service_node_info
they would actually get rolled back in the event of a reorg, but that
seems highly undesirable: you would end up rolling back to the old
values of the uptime proof and ip address (for example), but that should
not happen: those values are not dependent on the blockchain and so
should not be affected by a reorg/rollback. With this change they
aren't since there is only one actual proof stored.
Note that the shared storage here only applies to in-memory states;
states loaded from the db will still be duplicated.
This commit makes various simplifications and optimizations, mainly in
the service node list code.
All in all, this shaves about 5% of the CPU time used for re-processing
the entire service node list.
In particular:
- changed m_state_history from a std::vector to a std::set that sorts on
height. This is responsible for the bulk of the CPU reduction by
significantly reducing the amount of work required for checkpoint
culling, which had to shuffle a lot of `state_t`s around when removing
from the middle of a vector.
- the above also allows replacing the binary-search `std::lower_bound`
logic with a much simpler `find()`.
- since the history is now always sorted, removed the error messages
related to sorting that either can't happen (on store) or don't matter
(on load).
- Added some converting constructors to simplify code (for example, a
`state_t` can now be very usefully constructed from an r-value
`state_serialized`).
- Many construct + moves (and a couple construct + copy) were replaced
with in-place constructions.
- removed some unused variables
* Update state transition check to account for height and universally set timestamp on recommission
Reject invalidated state changes by their height after HF13
* Prune invalidated state changes on blockchain increment
Simplify check_tx_inputs for state_changes by using service node list
Instead of querying the last 60 historical blocks for every transaction,
use the service node list and determine the state of the service node
and if it can transition to its new state.
We also now enforce, at hardfork 13, that the network cannot commit
transactions to the blockchain if they would have been invalidated by
a newer state change that is already on the blockchain.
This is backwards compatible all the way back to hardfork 9.
Greatly simplify state change tx pruning on block added
Use the new stricter rules for pruning state changes in the txpool
We can do so because pruning the TX pool won't cause issues at the
protocol level, but the more people we can upgrade the better network
behaviour we get in terms of propagating more intrinsically correct
ordering of state changes to other peers.
* Don't generate state changes if not valid, disallow voting if node is non-votable
* Add disabled-by-default quorum storage support
This adds support for storing N blocks of recent expired quorum state
history in lokid (and dumping to/from the lmdb).
This isn't useful for regular nodes (and that's why we don't store it)
but is incredibly useful for the block explorer to be able to report
*which* node got deregistered/decommissioned/etc. by a given
state_change tx.
* Fix quorum states copy
* Add soft forking for checkpointing on mainnet
* Clear checkpoints on softfork on mainnet
* Move softfork date until we're ready, fix test results and modulo addition
* Only round up delete_height if it's not a multiple of CHECKPOINT_INTERVAL
* Remove unused variable soft fork in service_node_rules (replaced by cryptonote_config)
Update the way votes are relayed to fix superfluous p2p disconnections:
- check that votes are actually not older than VOTE_LIFETIME before
relaying.
- Fix off-by-one error in incoming vote handling (it was rejecting on >=
maximum age instead of > maximum age).
- Add a 5-block buffer to the incoming vote tolerance handling: don't
accept a vote that is too new or too old, but also don't trigger a
disconnection if it is within 5 blocks of where it would be
acceptable so that relays from slightly out of sync peers don't
trigger p2p disconnections.
* Add deregistration of checkpoints by checking how many votes are missed
Move uptime proofs and add checkpoint counts in the service_node_list:
we typically prune uptime proofs by time, but we want to switch to a
model where we persist proof data until the node expires. Otherwise we
would prune uptime entries and potentially our checkpoint vote counts,
which would cause premature deregistration as the expected vote counts
start mismatching the number of received votes.
* Revise deregistration
* Fix test breakages
* uint16_t for port, remove debug false, min votes to 2 in integration mode
* Fix integration build
* Removes sn-public-ip and combines into --service-node
* Only rename the argument, don't combine args for clarity
* Update help message to use --service-node-public-ip
The vast majority of the time `is_active()` is what we actually want.
Most of these are just a name change, but there is also one important
fix here to the next-winner list which needs to use active, not
fully-funded.
epee's is_ip_local is missing two ranges that are commonly found:
link-local auto-config addresses (169.254.0.0/16) that are sometimes
used as a fallback when DHCP isn't present; and the carrier-grade NAT
range (100.64.0.0/10) reserved for carriers who impose NAT on their
customers.
There are also other ranges that aren't exactly "local" but aren't
public either: 0.0.0.0/8 isn't a valid destination address; and
224.0.0.0/3 includes both the 224/4 multicast range and the reserved
(but most likely never to be used) 240.0.0.0/4 range.
These are now added to a new `is_ip_public` function that returns true
if it's not local or loopback, and not one of these special ranges.
This also simplifies some convoluted netmask logic. (The simplification
would look better except that epee took the extremely bizarre and wrong
decision to store IPv4 addresses in little-endian order).
* core: do not commit half constructed batch db txn
* Add defer macro
* Revert dumb extra copy/move change
* Fix pop_blocks not calling hooks, fix BaseTestDB missing prototypes
* Merge ServiceNodeCheckpointing5 branch, syncing and integration fixes
* Update tests to compile with relaxed-registration changes
* Get back to feature parity pre-relaxed registration changes
* Remove debug changes noticed in code review and some small bugs
This replaces the deregistration mechanism with a new state change
mechanism (beginning at the v12 fork) which can change a service node's
network status via three potential values (and is extensible in the
future to handle more):
- deregistered -- this is the same as the existing deregistration; the
SN is instantly removed from the SN list.
- decommissioned -- this is a sort of temporary deregistration: your SN
remains in the service node list, but is removed from the rewards list
and from any network duties.
- recommissioned -- this tx is sent by a quorum if they observe a
decommissioned SN sending uptime proofs again. Upon reception, the SN
is reactivated and put on the end of the reward list.
Since this is broadening the quorum use, this also renames the relevant
quorum to an "obligations" quorum (since it validates SN obligations),
while the transactions are "state_change" transactions (since they
change the state of a registered SN).
The new parameters added to service_node_rules.h control how this works:
// Service node decommissioning: as service nodes stay up they earn "credits" (measured in blocks)
// towards a future outage. A new service node starts out with INITIAL_CREDIT, and then builds up
// CREDIT_PER_DAY for each day the service node remains active up to a maximum of
// DECOMMISSION_MAX_CREDIT.
//
// If a service node stops sending uptime proofs, a quorum will consider whether the service node
// has built up enough credits (at least MINIMUM): if so, instead of submitting a deregistration,
// it instead submits a decommission. This removes the service node from the list of active
// service nodes both for rewards and for any active network duties. If the service node comes
// back online (i.e. starts sending the required performance proofs again) before the credits run
// out then a quorum will reinstate the service node using a recommission transaction, which adds
// the service node back to the bottom of the service node reward list, and resets its accumulated
// credits to 0. If it does not come back online within the required number of blocks (i.e. the
// accumulated credit at the point of decommissioning) then a quorum will send a permanent
// deregistration transaction to the network, starting a 30-day deregistration count down.
This commit currently includes the following values (which are not
necessarily final):
- 8 hours (240 blocks) of credit required for activation of a
decommission (rather than a deregister)
- 0 initial credits at registration
- a maximum of 24 hours (720 blocks) of credits
- credits accumulate at a rate that you hit 24 hours of credits after 30
days of operation.
Miscellaneous other details of this PR:
- a new TX extra tag is used for the state change (including
deregistrations). The old extra tag has no version or type tag, so
couldn't be reused. The data in the new tag is slightly more
efficiently packed than the old deregistration transaction, so it gets
used for deregistrations (starting at the v12 fork) as well.
- Correct validator/worker selection required generalizing the shuffle
function to be able to shuffle just part of a vector. This lets us
stick any down service nodes at the end of the potential list, then
select validators by only shuffling the part of the index vector that
contains active service indices. Once the validators are selected, the
remainder of the list (this time including decommissioned SN indices) is
shuffled to select quorum workers to check, thus allowing decommissioned
nodes to be randomly included in the nodes to check without being
selected as a validator.
- Swarm recalculation was not quite right: swarms were recalculated on
SN registrations, even if those registrations were shared-node
registrations, but *not* recalculated on stakes. Starting with the
upgrade this behaviour is fixed (swarms aren't actually used currently
and aren't consensus-relevant so recalculating early won't hurt
anything).
- Details on decomm/dereg are added to RPC info and print_sn/print_sn_status
- Slightly improves the % of reward output in the print_sn output by
rounding it to two digits, and reserves space in the output string to
avoid excessive reallocations.
- Adds various debugging at higher debug levels to quorum voting (into
all of voting itself, vote transmission, and vote reception).
- Reset service node list internal data structure version to 0. The SN
list has to be rescanned anyway at upgrade (its size has changed), so we
might as well reset the version and remove the version-dependent
serialization code. (Note that the affected code here is for SN states
in lmdb storage, not for SN-to-SN communication serialization).
This converts the transaction type and version to scoped enums, giving
type safety and making the tx type assignment less error prone because
there is no implicit conversion or comparison with raw integers to
worry about.
This ends up converting any use of `cryptonote::transaction::type_xyz`
to `cryptonote::transaction::txtype::xyz`. For version, names like
`transaction::version_v4` become `cryptonote::txversion::v4_tx_types`.
This also allows/includes various other simplifications related to or
enabled by this change:
- handle `is_deregister` dynamically in serialization code (setting
`type::standard` or `type::deregister` rather than using a
version-determined union)
- `get_type()` is no longer needed with the above change: it is now
much simpler to directly access `type` which will always have the
correct value (even for v2 or v3 transaction types). And though there
was an assertion on the enum value, `get_type()` was being used only
sporadically: many places accessed `.type` directly.
- the old unscoped enum didn't have a type but was assumed castable
to/from `uint16_t`, which technically meant there was potential
undefined behaviour when deserializing any type values >= 8.
- tx type range checks weren't being done in all serialization paths;
they are now. Because `get_type()` was not used everywhere (lots of
places simply accessed `.type` directly) these might not have been
caught.
- `set_type()` is not needed; it was only being used in a single place
(wallet2.cpp) and only for v4 txes, so the version protection code was
never doing anything.
- added a std::ostream << operator for the enum types so that they can be
output with `<< tx_type <<` rather than needing to wrap it in
`type_to_string(tx_type)` everywhere. For the versions, you get the
annotated version string (e.g. 4_tx_types) rather than just the number
4.
* Initial updates to allow syncing of checkpoints in protocol_handler
* Handle checkpoints in prepare_handle_incoming_blocks
* Debug changes for testing, cancel changes later
* Add checkpoint pruning code
* Reduce DB checkpoint accesses, sync votes for up to 60 blocks
* Remove duplicate lifetime variable
* Move parsing checkpoints to above early return for consistency
* Various integration test fixes
- Don't try participate in quorums at startup if you are not a service node
- Add lock for when potentially writing to the DB
- Pre-existing batch can be open whilst updating checkpoint so can't use
DB guards
- Temporarily emit the ban message using msgwriter instead of logs
* Integration mode bug fixes
- Always deserialize empty quorums so get_testing_quorum only fails when
an actual critical error has occurred
- Cache the last culled height so we don't needlessly query the DB over
the same heights trying to delete already-deleted checkpoints
- Submit votes when new blocks arrive for more reliable vote
transportation
* Undo debug changes for testing
* Update incorrect DB message and stale comment
* Incorporate service node ip address into uptime proofs; expose them using rpc
* Check that storage server port is specified in service-node mode
* Remove problematic const, rename argument name for storage port, update comments
* Validate ip address when receive uptime proof
* Better argument names and descriptions
* Initial updates to allow syncing of checkpoints in protocol_handler
* Handle checkpoints in prepare_handle_incoming_blocks
* Parse checkpoints sent by peer
* Fix rebase to dev referencing no longer valid argument
* Unify checkpointing and uptime quorums
* Begin making checkpoints cull old votes/checkpoints
* Begin rehaul of service node code out of core, to assist checkpoints
* Begin overhaul of votes to move responsibility into quorum_cop
* Update testing suite to work with the new system
* Remove vote culling from checkpoints and into voting_pool
* Fix bugs making integration deregistration fail
* Votes don't always specify an index in the validators
* Update tests for validator index member change
* Rename deregister to voting, fix subtle hashing bug
Update the deregister hash derivation to use uint32_t as originally
set, not uint64_t, which otherwise changes the hash and produces
different results.
* Remove un-needed nettype from vote pool
* PR review, use <algorithms>
* Rename uptime_deregister/uptime quorums to just deregister quorums
* Remove unused add_deregister_vote, move side effect out of macro
* Begin adding functions to recalculate difficulty
* Add command line args to utility for recalculating difficulty
* Exception safety for recalculating difficulty
* Update help text for recalc difficulty
* Add recalc flag on the daemon
* Make context be const, signify intent for var++ to 1
* Add functions for storing checkpoints to the DB
* Allocate the DB entry on the stack instead of heap
* Add virtual overrides for new checkpoint functions
* Clean up for pull request, simplify some logic
* Revise API to include height in checkpoint header
* Move log to top of function even if early exit
* Begin moving checkpoints to db
* Allow storing of checkpoints to DB
* Cleanup for code reviewer, fix unit tests
* Fix tests, fix casting issues
* Don't use DUPSORT, use height->checkpoint mapping in DB
* Remove if 0 disabling checkpoint vote, we already check HF12
* Fix unit test infinite loop
* Update db schemas to match blk_checkpoint_header
* Code review
* Collect checkpoint votes and signatures
* Copy deregister vote and adapt for checkpoint vote
Monkey see, monkey do
Write boilerplate code for receiving and relaying votes
* Make current checkpoints service node aware
Fix not saving checkpoint hash for normal checkpoints
Simplify some of the checkpointing API
* Verify blocks against service node checkpoints
Remove checkpoint copy, make checkpoint at vote threshold
Fix null ptr dereferences, add debug print checkpoints
* Add simple node checkpointing w/no conflict resolution
* Use endl to flush output
* Add hash to vote
* Revise checkpoint rules
* Don't store checkpoints in a linked list
* Commit checkpoints on super majority of votes received
* Add locks since checkpoints is accessed in network thread
* Handle checkpoint vote conflicts better
* Comment out early exiting voting process for simplicity
* Clean up for code review
* Fix whitespacing
* More cleanup for review
* Remove debug changes, fix using upper_bound incorrectly, return points by value
* find_if uses == not < as the predicate
* Fix merge issues from rebasing multiple quorum types
This changed the stored proof info to a struct storing both the
timestamp and the major/minor/patch versions that were included in the
last uptime proof.
The version gets exposed as a "service_node_version" 3-int array in the
get_service_nodes RPC call.
This commit also includes a minor non-critical fix to the uptime proof
pruning as part of the changes: deleting while iterating isn't
guaranteed to iterate through all elements before C++14.
The original intent of one false positive a week on average
was not met, since what we really want is not the probability
of having exactly N blocks in T seconds, but either N blocks or
fewer in T seconds, or N blocks or more in T seconds.
Some of this could be cached since it calculates the same fairly
complex floating point values, but it seems pretty fast already.
Current 2.0.x nodes will only accept a v10 network uptime proof if the
proof's snode_version_major is exactly equal to 2 (rather than >= 2), so
the proofs generated by a 3.0.x node won't be accepted by 2.0.x nodes.
This commit fakes the snode version put into an uptime proof prior
to the v11 fork to a fictitious 2.3.x version instead of the actual
3.0.x to keep v2 nodes happy with the proof.
(The same issue isn't present for future upgrade: the code in 3.0
properly recognizes >= 3 as valid for sending v11 network proofs).
The 10 minute window will never trigger for 0 blocks, as that is still
fairly likely to happen even without the actual hash rate changing
much, so we add a 20 minute window, where it will trigger (for 0
blocks), and a one hour window.
This curbs runaway growth while still allowing substantial
spikes in block weight
Original specification from ArticMine:
here is the scaling proposal
Define: LongTermBlockWeight
Before fork:
LongTermBlockWeight = BlockWeight
At or after fork:
LongTermBlockWeight = min(BlockWeight, 1.4*LongTermEffectiveMedianBlockWeight)
Note: To avoid possible consensus issues over rounding, the LongTermBlockWeight for a given block should be calculated to the nearest byte, and stored as an integer in the block itself. The stored LongTermBlockWeight is then used for future calculations of the LongTermEffectiveMedianBlockWeight and not recalculated each time.
Define: LongTermEffectiveMedianBlockWeight
LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
Change Definition of EffectiveMedianBlockWeight
From (current definition)
EffectiveMedianBlockWeight = max(300000, MedianOverPrevious100Blocks(BlockWeight))
To (proposed definition)
EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)), 50*LongTermEffectiveMedianBlockWeight)
Notes:
1) There are no other changes to the existing penalty formula, median calculation, fees etc.
2) There is the requirement to store the LongTermBlockWeight of a block unencrypted in the block itself. This is to avoid possible consensus issues over rounding and also to prevent the calculations from becoming unwieldy as we move away from the fork.
3) When the EffectiveMedianBlockWeight cap is reached it is still possible to mine blocks up to 2x the EffectiveMedianBlockWeight by paying the corresponding penalty.
Note: the long term block weight is stored in the database, but not in the actual block itself,
since it requires recalculating anyway for verification.
No need to do a round-trip just to call set relayed on votes. Also makes
it more robust by actually checking that we were able to relay the vote
before declaring it as relayed.
* Cleanup and undoing some protocol breakages
* Simplify expiration of nodes
* Request unlock schedules entire node for expiration
* Fix off by one in expiring nodes
* Undo expiring code for pre v10 nodes
* Fix RPC returning register as unlock height and not checking 0
* Rename key image unlock height const
* Undo testnet hardfork debug changes
* Remove is_type for get_type, fix missing var rename
* Move serialisable data into public namespace
* Serialise tx types properly
* Fix typo in no service node known msg
* Code review
* Fix == to >= on serialising tx type
* Code review 2
* Fix tests and key image unlock
* Add command to print locked key images
* Update ui to display lock stakes, query in print cmd blacklist
* Modify print stakes to be less slow
* Remove autostaking code
* Refactor staking into sweep functions
It appears the staking code was derived from stake_main and implemented
separately at the beginning. This merges them back into a common code
path; after removing autostake there are only some minor differences.
It also ensures that any upstream changes to sweeping will be reflected
in the staking process, which we want.
* Display unlock height for stakes
* Begin creating output blacklist
* Make blacklist output a migration step
* Implement get_output_blacklist for lmdb
* In wallet output selection ignore blacklisted outputs
* Apply blacklisted outputs to output selection
* Fix broken tests, switch key image unlock
* Fix broken unit_tests
* Begin change to limit locked key images to 4 globally
* Revamp prepare registration for new min contribution rules
* Fix up old back case in prepare registration
* Remove debug code
* Cleanup debug code and some unnecessary changes
* Fix migration step on mainnet db
* Fix blacklist outputs for pre-existing DB's
* Remove irrelevant note
* Tweak scanning addresses for locked stakes
Since we now only allow contributions from the primary address, we can
skip checking all subaddresses + lookahead, which speeds up wallet scanning
* Define macro for SCNu64 for Mingw
* Fix failure on empty DB
* Add missing error msg, remove contributor from stake
* Improve staking messages
* Flush prompt to always display
* Return the msg from stake failure and fix stake parsing error
* Tweak fork rules for smaller bulletproofs
* Tweak pooled nodes minimum amounts
* Fix crash on exit, there's no need to store on destructor
Since all information about service nodes is derived from the blockchain
and we store state every time we receive a block, storing in the
destructor is redundant as there is no new information to store.
* Make prompt be consistent with CLI
* Check max number of key images from a user to a node
* Implement error message on get_output_blacklist failure
* Remove resolved TODO's/comments
* Handle infinite staking in print_sn
* Atoi->strtol, fix prepare_registration, virtual override, stale msgs
This will trigger if a reorg is seen. This may be used to do things
like stop automated withdrawals on large reorgs.
%s is replaced by the height at the split point
%h is replaced by the height of the new chain
%n is replaced by the number of new blocks after the reorg
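The placeholder expansion described above can be sketched like this; the function name and structure are illustrative, not the daemon's actual notification code.

```cpp
#include <cstdint>
#include <string>

// Expand the reorg notification placeholders in a user-supplied command:
// %s -> height at the split point, %h -> height of the new chain,
// %n -> number of new blocks after the reorg.
std::string expand_reorg_command(std::string cmd,
                                 uint64_t split_height,
                                 uint64_t new_height,
                                 uint64_t new_blocks)
{
    auto replace_all = [&cmd](const std::string& tag, const std::string& value) {
        for (size_t pos = cmd.find(tag); pos != std::string::npos;
             pos = cmd.find(tag, pos + value.size()))
            cmd.replace(pos, tag.size(), value);
    };
    replace_all("%s", std::to_string(split_height));
    replace_all("%h", std::to_string(new_height));
    replace_all("%n", std::to_string(new_blocks));
    return cmd;
}
```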
b6534c40 ringct: remove unused senderPk from ecdhTuple (moneromooo-monero)
7d375981 ringct: the commitment mask is now deterministic (moneromooo-monero)
99d946e6 ringct: encode 8 byte amount, saving 24 bytes per output (moneromooo-monero)
cdc3ccec ringct: save 3 bytes on bulletproof size (moneromooo-monero)
f931e16c add a bulletproof version, new bulletproof type, and rct config (moneromooo-monero)
* Remove dead branches in hot-path check_tx_inputs
Also renames #define for mixins to better match naming convention
* Shuffle around some more code into common branches
* Fix min/max tx version rules, since there is 1 v2 tx on the v9 fork
* First draft infinite staking implementation
* Actually generate the right key image and expire appropriately
* Add framework to lock key images after expiry
* Return locked key images for nodes, add request unlock option
* Introduce transaction types for key image unlock
* Update validation steps to accept tx types, key_image_unlock
* Add mapping for lockable key images to amounts
* Change inconsistent naming scheme of contributors
* Create key image unlock transaction type and process it
* Update tx params to allow v4 types and as a result construct_tx*
* Fix some serialisation issues not sending all the information
* Fix dupe tx extra tag causing incorrect deserialisation
* Add warning comments
* Fix key image unlocks parsing error
* Simplify key image proof checks
* Fix rebase errors
* Correctly calculate the key image unlock times
* Blacklist key image on deregistration
* Serialise key image blacklist
* Rollback blacklisted key images
* Fix expiry logic error
* Disallow requesting stake unlock if already unlocked client side
* Add double spend checks for key image unlocks
* Rename get_staking_requirement_lock_blocks
To staking_initial_num_lock_blocks
* Begin modifying output selection to not use locked outputs
* Modify output selection to avoid locked/blacklisted key images
* Cleanup and undoing some protocol breakages
* Simplify expiration of nodes
* Request unlock schedules entire node for expiration
* Fix off by one in expiring nodes
* Undo expiring code for pre v10 nodes
* Fix RPC returning register as unlock height and not checking 0
* Rename key image unlock height const
* Undo testnet hardfork debug changes
* Remove is_type for get_type, fix missing var rename
* Move serialisable data into public namespace
* Serialise tx types properly
* Fix typo in no service node known msg
* Code review
* Fix == to >= on serialising tx type
* Code review 2
* Fix tests and key image unlock
* Add additional test, fix assert
* Remove debug code in wallet
* Fix merge dev problem
* create get_service_node_list rpc call
currently does nothing, just a shell (that compiles)
* implement get_all_service_node_keys rpc call
* change get_all_service_node_keys rpc to use json rpc
also change the result to be vector of hex strings rather than binary keys
* Make nodes be plural, add hex to base32z for keys
Base32z for Lokinet internal usage
* Add option to return fully funded service nodes only
* Add nullptr check for conversion
* Add assert for incorrect usage of to base32z
The blockchain prunes seven eighths of prunable tx data.
This saves about two thirds of the blockchain size, while
keeping the node useful as a sync source for an eighth
of the blockchain.
No other data is currently pruned.
There are three ways to prune a blockchain:
- run monerod with --prune-blockchain
- run "prune_blockchain" in the monerod console
- run the monero-blockchain-prune utility
The first two will prune in place. Due to how LMDB works, this
will not reduce the blockchain size on disk. Instead, it will
mark parts of the file as free, so that future data will use
that free space, causing the file to not grow until free space
grows scarce.
The third way will create a second database, a pruned copy of
the original one. Since this is a new file, this one will be
smaller than the original one.
Once the database is pruned, it will stay pruned as it syncs.
That is, there is no need to use --prune-blockchain again, etc.
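The "seven eighths" arithmetic can be sketched as follows, assuming blocks are grouped into stripes of 4096 that cycle through 8 stripes (the values used by the pruning implementation); the names here are illustrative, not the actual codebase identifiers.

```cpp
#include <cstdint>

// Blocks are grouped into contiguous runs ("stripes") of 4096, cycling
// through 8 stripes; a pruned node keeps the prunable data for exactly
// one of the eight stripes, i.e. seven eighths is pruned.
constexpr uint64_t STRIPE_SIZE = 4096;
constexpr uint64_t NUM_STRIPES = 8;

// Which stripe (1..8) the block at the given height belongs to.
uint64_t block_stripe(uint64_t height)
{
    return (height / STRIPE_SIZE) % NUM_STRIPES + 1;
}

// Whether a pruned node keeping `my_stripe` retains this block's
// prunable data (unpruned data is always kept).
bool keeps_prunable_data(uint64_t height, uint64_t my_stripe)
{
    return block_stripe(height) == my_stripe;
}
```

This is why a pruned node remains useful as a sync source: for any block range inside its stripe it can still serve the full transaction data.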
aee7a4e3 wallet_rpc_server: do not use RPC data if the call failed (moneromooo-monero)
1a0733e5 windows_service: fix memory leak (moneromooo-monero)
0dac3c64 unit_tests: do not rethrow a copy of an exception (moneromooo-monero)
5d9915ab cryptonote: fix get_unit for non default settings (moneromooo-monero)
d4f50cb1 remove some unused code (moneromooo-monero)
61163971 a few minor (but easy) performance tweaks (moneromooo-monero)
30023074 tests: slow_memmem now returns size_t (moneromooo-monero)
This avoids the miner erroring out trying to submit blocks
to a core that's already shut down (and avoids pegging
the CPU while we're busy shutting down).
5808530f blockchain: remove unused output_scan_worker parameter (moneromooo-monero)
1426209a blockchain: don't run threads if we have just one function to run (moneromooo-monero)
6f7a5fd4 db_lmdb: slight speedup getting array data from the blockchain (moneromooo-monero)
99fbe100 db_lmdb: save some string copies for readonly db keys/values (moneromooo-monero)
bf31447e tx_pool: speed up take_tx for transactions from blocks (moneromooo-monero)
4f005a77 tx_pool: remove unnecessary get_transaction_hash (moneromooo-monero)
593ef598 perf_timer: call reserve on new timer array (moneromooo-monero)
6ecc99ad core: avoid unnecessary tx/blob conversions (moneromooo-monero)
00cc1a16 unit_tests: notify test special case for the usual weirdo (moneromooo-monero)
To help protect the privacy of people using Tor or I2P against traffic
volume analysis. This will really fly once we relay txes on a timer
rather than on demand, though.
Off by default for now since it's wasteful and doesn't bring
anything until I2P's in.
This reverts commit 0349e183ca.
Revert "Merge pull request #197 from Doy-lee/BlockchainPopBlocksCommand"
This reverts commit 0796630ffa, reversing
changes made to b73aa8ee1c.
* core: submit uptime proof immediately after registering
* Increase visibility of autostaking prompts
* quorum_cop: changed uptime proof prune timeout to 2 hours 10 minutes
* cleanup: removed scope limiting block
* check_tx_inputs: fix deregister double spend test to include deregisters from other heights
* config: new testnet network id, genesis tx, and version bump
* wallet2: fix testnet wallet blockheight approximation
* Fix change in address format in RPC which broke parsing and pooling contributors (#184)
* Fix service node endpoints for RPC to also use stdout (#185)
* fixed some further rct core tests (#180)
* Fix service node state by calling detached hooks on failure to switch to alt chain (#188)
* fixed block verification core tests (#186)
* fixed block verification core tests
* core tests: removed gen_block_miner_tx_out_is_small which is only relevant to hardfork version 1
* Don't consider expired deregistrations when filling block template
* Add unit tests for getting staking requirement (#191)
* First service node test (#190)
* core_tests: added service node tests
* core_tests: check balance after registration tx
* Fix underflow for popping rollback events (#189)
* Move deregistration age check into check_tx_inputs
* Zero-initialise rct_signatures: member txnFee is a uint64_t and had uninitialised values
* Enforce that deregisters must be 0 fee since we skip checks
* Add unit tests for vote validation (#193)
* Add unit tests for deregistration validation (#194)
* Mainnet checkpoint 86535, testnet 3591, 4166
* Bump version number
* Add print_sr for getting staking requirement (#198)
* Misc bugfixes (#203)
* removed unnecessary cast to double during txfee+coinbase calculation
* simplewallet: increased autostaking interval from 2 minutes to 40
* Fix casting issues from uint to int (#204)
* core_tests: check service node registration and expiration (#195)
* core_tests: check service node registration and deregistration
* core_tests for service nodes:
- include service nodes rewards when calculating account's balance
- check that service nodes rewards have been received
* fixed namespace error; reduced the scope of staking requirement constants
* On blockchain inc/dec mark deregisters relayble based on age (#201)
* Service nodes restore only 1 rollback bug (#206)
* Fix restore 1 rollback event, ensure prevent rollback is always added
* Remove adding prevent_rollback event at init
It gets called in on block added generic anyway.
* Log db exception, fix relation operators for vote/deregister lifetime (#207)
* Filter relayable deregisters w/ check_tx_inputs instead of blockchain callbacks
* Bump version to 0.3.7-beta
* fix build with GCC 8.1.0 (#211)
* Add temp hardfork rule in testnet for deregister lifetimes (#210)
* Update testnet, remove testnet forks, remove checkpoints, update blockheight estimate (#212)
* Don't ban peers for a bad vote, just drop their connection (#213)
* Update to version 0.3.0 release candidate (#215)
This reverts commit c4988f5a1f.
This un-reverts the changes to store the service node info in the
database. This could easily just be its own commit instead of a
revert, but doing a revert is amusing (and easier in this case).
* Add get_service_node_list_state command for analytics
* Update service_node_list_state to search particular pubkey
* Service node list state sorts display results by longest waiting
* Fix up leftover todos/unused data structures
* Change get_service_node_list_state to print_sn