The timestamp in the file didn't match the comment or the block height
(this doesn't break anything; the timestamp is only used for future
block arrival estimates).
Lokinet and SS are actually sending `pubkey_ed25519` for the parameter,
but oxen-core is expecting `ed25519_pubkey`. Fixes oxen-core to match
what lokinet/ss are actually sending.
This adds a new table to the batching schema to copy the accrued
balances every so often. This means that we can jump back to a
previous point without popping blocks.
The archiving is triggered in SQL every 100 blocks: the balances are
copied into the archive table, then pruned from it at a later time to
keep its size small. Rebuilding 100 blocks is pretty reasonable and
should take less than 10s.
For longer-distance pop_blocks and blockchain detaches, a snapshot
every 10k blocks is kept in the archive table. This takes longer to
rebuild but is better than rebuilding from scratch.
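The cadence described above could be sketched roughly like this (a
hypothetical C++ rendering; the real logic lives in SQL, and the
500-block recency window here is an illustrative assumption):

```cpp
#include <cstdint>

// Assumed intervals matching the description above.
constexpr uint64_t ARCHIVE_INTERVAL = 100;
constexpr uint64_t LONG_TERM_INTERVAL = 10000;

// Snapshot the accrued balances every 100 blocks.
bool should_archive(uint64_t height) {
    return height % ARCHIVE_INTERVAL == 0;
}

// Whether an archived snapshot at `height` survives pruning, given the
// current chain tip: recent snapshots are all kept, old ones only at
// 10k-block intervals so the table stays small.
bool keep_after_prune(uint64_t height, uint64_t top_height,
                      uint64_t keep_recent) {
    if (top_height - height <= keep_recent)
        return true;                         // recent: keep all snapshots
    return height % LONG_TERM_INTERVAL == 0; // old: only 10k snapshots
}
```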
The blockchain detached function is also added to our regular
blockchain detached hooks so that it gets called every time the
blockchain detaches; the lack of this appears to have caused some
issues previously, when some of the modules would detach but batching
would be stuck in an advanced state.
This lets lokinet or SS signal to oxend that something is known to be
wrong, and so oxend should not send a proof so that, hopefully, the
problem gets noticed and fixed.
Currently we're putting the last-uptime-received data in the service
node list info, even when we have more up-to-date info that hasn't yet
been accepted by the network.
This causes problems for lokinet, in particular, because it ignores any
SNs in the list without ed25519 pubkeys, including itself, which can
result in lokinet thinking it is deregistered for a while after the SN
becomes registered.
This updates `get_service_nodes` to always include current info (rather
than latest proof info) for itself when querying a service node.
- 10.2.0 oxend version
- keep lokinet at 0.9.9
- require storage server 2.4.0
- schedule 10.2.0 mandatory date as 14 September 00:00 UTC (10am in
  Australia Eastern Time; 8pm on the 13th in US Eastern Time)
Currently the hwdev.txt file is only created when a new wallet is
created with content specified as an argument; this changes it to
always create the file. The file is used by the GUI wallet to detect
whether the device is a hardware device. The CLI wallet behaviour is
left unchanged, keeping behaviour consistent between CLI and RPC
wallets.
The check for whether service node contributions are too small
miscalculated atomic OXEN. This brings in the correct figure for HF20
and, in the interim, adds measures to the wallet and blink to prevent
small contributions from being able to unlock in HF19.
This was nasty.
Remove --block-rate-notify entirely; it's nearly useless on Oxen.
block and reorg notify remain, but now use a post-block hook rather than
shoving toxic Notify crap into cryptonote_core headers.
Add hooks that are called *after* a block is successfully added to the
blockchain.
This also fixes a race condition with OMQ notify.block subscriptions:
because the notification was firing during (instead of after) block
addition, the lokinet/storage server receiving the notify would race to
fetch the block info, but that request could happen *before* the block
addition is finished, ending up with lokinet/SS sometimes getting a
stale block.
This hook *isn't* called after a block is added, but rather it is part
of the block addition process and can abort the whole thing by raising
an exception (or returning false, prior to this PR).
This is unintuitive and causes bugs if using it as a "block has been
added" hook.
bool returns suck in general, but in most cases here they are also a
pain in the ass because *each* place that returns false is also issuing
a log statement. If only there were a way to return error information
to the common caller to have the common caller handle it... oh wait,
there is!
- Instead of inheriting from a pure virtual base class, we now just use
  lambdas for hook callbacks.
- The signature of hooks also changes to take an info struct rather than
a list of parameters, so that you don't have to change everything to
ignore unwanted parameters if someone adds something to a hook.
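A minimal sketch of the lambda-based hook style (type and method names
here are illustrative, not the actual oxen-core API):

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Hooks take a single info struct, so adding a field later doesn't
// force every existing callback to change its signature.
struct block_added_info {
    uint64_t height;
    std::string hash;
};

class hooks {
    std::vector<std::function<void(const block_added_info&)>> block_added_;
public:
    void add_block_added_hook(std::function<void(const block_added_info&)> f) {
        block_added_.push_back(std::move(f));
    }
    // Called *after* the block is committed; a pre-add hook could
    // instead abort the addition by throwing, as described above.
    void on_block_added(const block_added_info& info) {
        for (auto& f : block_added_) f(info);
    }
};
```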
I'm not entirely sure how this happens, but I came across a node that
was failing here because the trigger didn't exist -- possibly from a
partial upgrade?
In any case, the `IF EXISTS` makes it more lenient.
The CLI wallet's `print_locked_stakes` command was buggy, overly
verbose, and missing pertinent information. This overhauls it.
Details are below; the quick differences:
- show the staking requirement
- make key image output optional with a +key_images flag on the command
- fix various bugs such as just showing the first contribution as the
"total" contribution
- add the wallet address of extra contributions, rather than just the
key image with no other info.
- put your own contributions first, and mark them as such
- show totals of other contributors
- sort service nodes to put unlocking nodes first; among unlocking and
non-unlocking nodes we sort by hex pubkey string.
- sort locked dereg outputs by unlock height
- considerable reformatting of how things are displayed
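The service node sort order described above (unlocking first, then by
hex pubkey string) could look roughly like this (`sn_entry` is a
hypothetical stand-in for the wallet's internal type):

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct sn_entry {
    std::string pubkey_hex;
    bool unlocking;
};

void sort_stakes(std::vector<sn_entry>& nodes) {
    std::sort(nodes.begin(), nodes.end(),
              [](const sn_entry& a, const sn_entry& b) {
                  if (a.unlocking != b.unlocking)
                      return a.unlocking;        // unlocking nodes first
                  return a.pubkey_hex < b.pubkey_hex;
              });
}
```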
---
Output now looks like this:
First we have 1-2 lines of general info:
Service Node: abcdef123456...
Unlock Height: 1234567 (omitted if not unlocking)
If there are other contributors then we print a total line such as:
Total Contributions: 15000 OXEN of 15000 OXEN required
For our own contribution, when we have a single contribution, we use one of:
Your Contribution: 5000 OXEN (Key image: abcdef123...)
Your Contribution: 5000 OXEN of 15000 OXEN required (Key image: abcdef123...)
(the second one applies if we are the only contributor so far).
If we made multiple contributions then:
Your Contributions: 5000 OXEN in 2 contributions:
Your Contributions: 5000 OXEN of 15000 OXEN required in 2 contributions:
(the second one if we are the only contributor so far).
This is followed by the individual contributions:
‣ 4000.5 OXEN (Key image: abcdef123...)
‣ 999.5 OXEN (Key image: 789cba456...)
If there are other contributors then we also print:
Other contributions: 10000 OXEN from 2 contributors:
• 1234.565 OXEN (T6U7YGUcPJffbaF5p8NLC3VidwJyHSdMaGmSxTBV645v33CmLq2ZvMqBdY9AVB2z8uhbHPCZSuZbv68hE6NBXBc51Gg9MGUGr)
Key image 123456789...
• 8765.435 OXEN (T6Tpop5RZdwE39iBvoP5xpJVoMpYPUwQpef9zS2tLL8yVgbppBbtGnzZxzkSp53Coi88wbsTHiokr7k8MQU94mGF1zzERqELK)
‣ 7530 OXEN (Key image: 23456789a...)
‣ 1235.435 OXEN (Key image: 3456789ab...)
If we aren't showing key images then all the key image details get omitted.
---
Locked key images get overhauled too; it wasn't at all clear from the
output *why* these are locked (i.e. these are locked contributions of
failed SNs):
Locked Stakes due to Service Node Deregistration:
‣ 234.567 OXEN (Unlock height 1234567; Key image: abcfed999...)
‣ 5000 OXEN (Unlock height 123333; Key image: cbcfef989...)
With batching, individual blocks can have a negative coinbase emission
because the tx fee gets added to the batch rewards database and not
paid out immediately, which then results in an unsigned underflow to a
value close to 2^64. Thus a block with no payout and a tx fee will have
an erroneous, huge positive coinbase emission when queried via
`get_coinbase_tx_sum`. For example block 1094068 queried with:
{"jsonrpc":"2.0","id":"0","method":"get_coinbase_tx_sum","params":{"height": 1094068, "count": 1}}
returns:
{
    "jsonrpc": "2.0",
    "id": "0",
    "result": {
        "burn_amount": 0,
        "emission_amount": 18446744073699378616,
        "fee_amount": 10173000,
        "status": "OK"
    }
}
This commit fixes it by making the values signed (and also serves as an
example of why unsigned integers are usually the wrong choice):
{
    "jsonrpc": "2.0",
    "id": "0",
    "result": {
        "burn_amount": 0,
        "emission_amount": -10173000,
        "fee_amount": 10173000,
        "status": "OK"
    }
}
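The underlying arithmetic can be reproduced in a few lines (a
standalone sketch, not the actual RPC code):

```cpp
#include <cstdint>

// With a zero payout and 10173000 in fees, unsigned emission wraps
// around to a value near 2^64 (the pre-fix behaviour seen for block
// 1094068), while signed arithmetic gives the intended negative value.
uint64_t emission_unsigned(uint64_t payout, uint64_t fees) {
    return payout - fees;  // wraps: the buggy behaviour
}

int64_t emission_signed(uint64_t payout, uint64_t fees) {
    return (int64_t)payout - (int64_t)fees;  // the fixed behaviour
}
```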
Profiling shows noticeable CPU spend in allocating memory for this
vector (which makes sense, since we are looping through ~1700 nodes and
building a reward vector for each one). Avoid it by reusing a single
vector that gets cleared (but not reallocated more than a handful of
times).
This reduces batching CPU time in a debug build by about 12%; curiously
I didn't find a noticeable reduction in a release build.
Use intrinsic 128-bit types when supported by the compiler; the
operations on them are a bit faster than the two-uint64_t
implementation we otherwise use, and this shaves about 7% off the
batch processing time (which makes a lot of mul128_div64 calls).
Also removes a bunch of unused, useless epee crap from int-util.h.
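A sketch of what the intrinsic-based path looks like (gcc/clang
`__int128`; the real code keeps a two-uint64_t fallback for compilers
without it):

```cpp
#include <cstdint>

// Compute (a * b) / d where the intermediate product can exceed 64
// bits; the 128-bit intrinsic makes this a couple of instructions.
uint64_t mul128_div64(uint64_t a, uint64_t b, uint64_t d) {
    unsigned __int128 prod = (unsigned __int128)a * b;
    return (uint64_t)(prod / d);
}
```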
Converting between internal addresses and string wallet addresses is
surprisingly expensive (partly because a keccak hash is involved);
normally on the blockchain that doesn't matter because string
conversions are rare, but batching exposed some code paths that do the
conversion a lot.
This makes two optimizations to considerably speed up batching
processing time:
1. Add an address string hash for batch addresses. We don't see that
many, but we tend to see the same addresses for *every* block. By
caching it we can considerably reduce the time needed to do the
actual conversions.
2. In many places we don't really need strings at all: we can work just
with the address_infos (which are cheap). This eliminates some of
the places we use addresses and pushes address_infos further through
the code.
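The caching in (1) is essentially memoization of the conversion; a
minimal sketch with a placeholder conversion function (the counter is
only there to show how often the expensive path actually runs):

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for the real (keccak-involving) conversion.
int conversion_calls = 0;
std::string expensive_to_string(uint64_t addr) {
    ++conversion_calls;
    return "addr:" + std::to_string(addr);
}

// Memoized wrapper: batching sees the same few addresses every block,
// so caching the string form avoids repeating the expensive conversion.
const std::string& cached_address_str(uint64_t addr) {
    static std::unordered_map<uint64_t, std::string> cache;
    auto it = cache.find(addr);
    if (it == cache.end())
        it = cache.emplace(addr, expensive_to_string(addr)).first;
    return it->second;
}
```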
In total, this plus the previous optimization cuts batch processing
CPU time by about 85%, roughly half from this optimization and roughly
half from the store-offsets optimization.
On a debug build, timing mainnet resyncing from just before HF19 to
current:
- neither optimization: 223.60s spent processing batch jobs (of 282s
total recalculating subsystems).
- just offsets: 131s spent in batch processing
- both optimizations: 33s spent in batch processing (down to 60s total
recalculating).
On a release build:
- before: 107.35s (4.73s snl; 95.55s batch)
- after: 28.27s (4.66s snl; 0.00s ons; 21.72s batch)
Batch timing was being reported as 0s, even though it is taking up the
vast majority of the time as of the HF block.
Reduce the block processing size to 500 blocks at a time rather than
1000.
Change when we print status updates to print once duration passes a 10s
threshold rather than every 10*1000 blocks.
This latter change probably also makes oxend more reliable on
low-powered servers because it also guards the systemd status update
message (which is needed to give us more watchdog time).
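The duration-based threshold is straightforward; a sketch with the
current time passed in so it can be exercised without waiting (names
are illustrative):

```cpp
#include <chrono>

// Rate-limit status updates (and the systemd watchdog ping) to once
// every 10 seconds of elapsed time, rather than every N blocks.
struct status_throttle {
    std::chrono::steady_clock::time_point last;
    bool should_update(std::chrono::steady_clock::time_point now) {
        if (now - last >= std::chrono::seconds{10}) {
            last = now;
            return true;
        }
        return false;
    }
};
```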
When the sqlite batching database is resyncing it is dependent on the
service node list state being correct at the time the block is added
to the batching database. When syncing a single module this is
sometimes not the case, as the service node list state may be ahead or
behind. This patch simply ensures that when the batching database is
behind, the service node list is reset back to that point in time.