HF16 removes the miner block reward. This breaks a lot of assumptions in the
testing code, which blanket-mines a bunch of blocks to fund the miner
wallet that we use throughout the tests.
Instead, for tests that require hard forks HF16 or later, the
generated hard-fork table allocates at least 60 blocks prior to the
block reward changes in HF15 and HF16. 60 was chosen because it is the
largest number of blocks any test mines to accumulate funds for the
miner, so all tests should continue working with this change.
We could also have opted to generate a Service Node for the default miner so
that it receives funds, but this brought up additional test-suite errors.
In the interest of time, this is the least intrusive working way to
get tests running for the upcoming hard fork.
The bulletproof tests are inherited from Monero. They do not use change
addresses, so the difference between the incoming and outgoing amounts is
set as the fee.
In blockchain.cpp, block validation derives the base reward as
block_reward - fee, i.e. subtracting the fee from the block reward.
For whatever reason the Monero test doesn't use a change address, which
is where this extra amount would normally be redirected. I don't have a
strong enough understanding of what this test is doing, so I've once
again adjusted the values such that the fee (the remainder) is never
larger than the block reward, so block validation still produces a base
reward that doesn't underflow past 0 in the unsigned subtraction.
The --regtest/fakechain mode that blink uses no longer generates enough
mining rewards to submit blink registrations under the LRC-6 change
(with HF16 mining rewards at 0). This commit changes two things:
- fakechain now goes hf7-8-9-...-16 instead of jumping immediately from
hf7 to hf16 at height 2; this ensures that there are *some* mining
rewards (before hf16) that can be used to register SNs and thus get
additional rewards.
- the registration logic is adapted to account for the additional blocks
needed under HF16 SN rewards before registrations can be submitted.
ConnectionIDs are loki-mq's (new from quorumnet) way of replying to a
specific connection; replying by pubkey now only works for service
nodes, while sending via ConnectionID works for any connection.
The existing code was using the pubkey, which failed to send a reply
from the blink quorum when the initiator was not itself a service node.
The build often hits the 50-minute limit (finishing on time seems to
require some luck and/or ccache'd previous compilations), so move it to
allowed failures so that it doesn't fail the GitHub CI run.
I opted for a plain enum instead of pulling in mapbox::variant, for
simplicity. mapbox::variant would save a few lines of code by
capturing the type at assignment; here we instead have helper functions
for assigning the type.
/home/travis/build/loki-project/loki-core/src/cryptonote_basic/tx_extra.h:422:20: error: enclosing class of constexpr non-static member function ‘void cryptonote::tx_extra_loki_name_system::set_field(lns::extra_field)’ is not a literal type
constexpr void set_field (lns::extra_field bit) { fields = static_cast<lns::extra_field>(static_cast<uint8_t>(fields) | static_cast<uint8_t>(bit)); }
Register height is only updated when a mapping is first purchased or
renewed, i.e. it really is the register height. We previously assumed
register_height changed on every update; but you are already given the
TXID of the transaction, which by extension gives you the
register_height, so that was also somewhat redundant.
Updates are granular and can update individual fields of an LNS TX,
meaning that when we reorg, restoring a record to its proper state can
potentially require searching further back than the detach height.
The following scenarios, copied from a comment in loki_name_system.cpp,
illustrate this:
-----------------------------------------------------------------------------------------------
Detach Logic: Simple Case
-----------------------------------------------------------------------------------------------
LNS Buy @ Height 100: LNS Record={field1=a1, field2=b1, field3=c1}
LNS Update @ Height 200: LNS Record={field1=a2 }
LNS Update @ Height 300: LNS Record={ field2=b2 }
LNS Update @ Height 400: LNS Record={ field3=c2}
Blockchain detaches to height 301, the target LNS record now looks like
{field1=a2, field2=b2, field3=c1}
Our current LNS record looks like
{field1=a2, field2=b2, field3=c2}
To rebuild our record, find the closest LNS update that is earlier than the
detach height. If we run out of transactions to walk back through, the LNS
entry is just deleted.
-----------------------------------------------------------------------------------------------
Detach Logic: Advanced Case
-----------------------------------------------------------------------------------------------
LNS Buy @ Height 100: LNS Record={field1=a1, field2=b1, field3=c1}
LNS Update @ Height 200: LNS Record={field1=a2 }
LNS Update @ Height 300: LNS Record={ field2=b2 }
LNS Update @ Height 400: LNS Record={ field3=c2}
LNS Update @ Height 500: LNS Record={field1=a3, field2=b3, field3=c3}
LNS Update @ Height 600: LNS Record={ field3=c4}
Blockchain detaches to height 401, the target LNS record now looks like
{field1=a2, field2=b2, field3=c2}
Our current LNS record looks like
{field1=a3, field2=b3, field3=c4}
To get all the fields back, we can't just replay the latest LNS update
transactions in reverse chronological order back to the detach height;
we would miss the updates to field1=a2 and field2=b2.
To rebuild our LNS record, we need to iterate back until we have found
the TX(s) that updated each LNS field, so that every field is reverted
to a state representative of the pre-detach height.
i.e. Go back to the closest LNS update before the detach height (height
400 with field3=c2). Next, iterate back until every LNS field has been
set at a point in time before the detach height (ending at height 200
with field1=a2).
This is so that, when adding new fields to records, updating the helper
will trigger compile errors in pre-existing record checks. Over the
lifetime of implementing LNS there have been several updates to mapping
records that weren't being tested.