The ONS burn price was lowered in HF 18, and the testnet has
transactions that exceed the adjusted price. The codebase allows for the
transactions already on testnet, but in the process that allowance broke
the test suite. See PR #1441 for further details.
A couple people have messaged me now because they tried transferring
keys and used the `restore` command on their legacy `key` file, but it
restored an ed25519 key.
This adds a red warning when attempting to restore an ed25519 key to a
filename ending in `/key`, with a note that the user probably wants the
`restore-legacy` command instead.
The merge of PR #1433 changed the fees within HF18 on testnet, which
broke syncing/ONS rescanning because the pre-merge testnet has
higher-fee ONS txes on it.
This adds a hack to allow wrong fees for blocks before yesterday.
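
Something along these lines (a sketch only; the names, the cutoff value,
and the exact fee comparison are assumptions, not the real validation
code):

```cpp
#include <cstdint>
#include <ctime>

// Sketch: blocks older than the cutoff are exempted from the fee check so
// that the pre-merge testnet ONS txes (which carry the old, higher fee)
// still validate during sync/rescan.
constexpr std::time_t OLD_FEE_CUTOFF = 1617235200; // illustrative timestamp

bool ons_fee_ok(uint64_t burned, uint64_t required, std::time_t block_timestamp) {
    if (block_timestamp < OLD_FEE_CUTOFF)
        return true; // grandfather in blocks mined before the fee change
    return burned >= required;
}
```
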
Clang warns about http_server having a non-virtual destructor; we aren't
actually doing anything that would cause problems, but the warning is
legit and a correct thing to fix.
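
For reference, the standard shape of the fix for `-Wnon-virtual-dtor`
(a sketch, not the actual http_server class):

```cpp
// A type that is (or may be) deleted through a base-class pointer should have
// either a public virtual destructor or a protected non-virtual one; that is
// exactly what Clang's -Wnon-virtual-dtor checks for.
struct request_handler {
    virtual void handle() = 0;
    virtual ~request_handler() = default; // silences the warning; deleting a
                                          // derived object via base* is now safe
};
```
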
macOS's system_clock apparently only has microsecond resolution while
steady_clock has nanosecond resolution, so the conversion here failed to
compile (time_point conversions are only implicit when converting to a
*more* precise type, never a less precise one).
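
A minimal illustration of the failure mode (assuming the problem was
mixing nanosecond durations with system_clock time_points; the function
itself is made up):

```cpp
#include <chrono>

// On macOS, system_clock::duration is microseconds while steady_clock's is
// nanoseconds, so arithmetic mixing them yields a nanosecond-precision
// time_point that will not implicitly convert back down.
std::chrono::system_clock::time_point
add_delay(std::chrono::system_clock::time_point tp, std::chrono::nanoseconds d) {
    // `tp + d` has nanosecond precision; `return tp + d;` compiles on Linux
    // (where both clocks are nanosecond) but fails on macOS. The explicit
    // cast is allowed to lose precision:
    return std::chrono::time_point_cast<std::chrono::system_clock::duration>(tp + d);
}
```
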
The important part here is removing this line:
    if (swarm_to_snodes.size() == 1) return MAX_ID / 2;
because, if we end up in a case where we have only one swarm and it
*already* has that ID (e.g. create 2, which will be [MAX/2, 0], then
drop 0), then this returns a swarm_id that already exists. That is bad:
we then fail to insert the new swarm, a service node gets left with an
unassigned swarm id, and that in turn causes issues in SS because the
node thinks it is deactivated (it has no swarm id) even though it *is*
in the active nodes list, so other network members still try to talk to
it.
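
To illustrate, here is a sketch of one collision-free way to pick a new
id (illustrative only, not necessarily what the actual fix does; the
types and MAX_ID value are assumptions): treat the 64-bit id space as a
ring and take the midpoint of the widest gap between existing ids.

```cpp
#include <cstdint>
#include <map>
#include <vector>

using swarm_id_t = uint64_t;
constexpr swarm_id_t MAX_ID = UINT64_MAX;

// By construction this never returns an id that is already taken (assuming
// the ring isn't saturated, which 2^64 possible ids makes a non-issue).
swarm_id_t pick_new_swarm_id(const std::map<swarm_id_t, std::vector<int>>& swarm_to_snodes) {
    if (swarm_to_snodes.empty())
        return 0;
    if (swarm_to_snodes.size() == 1)
        // Opposite side of the ring from the one existing id; unlike the
        // removed `return MAX_ID / 2;`, this cannot collide with it.
        return swarm_to_snodes.begin()->first + MAX_ID / 2;
    // Otherwise find the widest gap between consecutive ids (wrapping around).
    swarm_id_t best_start = 0, best_gap = 0;
    swarm_id_t prev = swarm_to_snodes.rbegin()->first;
    for (auto& [id, snodes] : swarm_to_snodes) {
        swarm_id_t gap = id - prev; // unsigned wraparound handles the ring
        if (gap > best_gap) { best_gap = gap; best_start = prev; }
        prev = id;
    }
    return best_start + best_gap / 2;
}
```
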
This moves all the responsibility for ping testing (deciding when a node
is unreachable, etc.) into oxend, allowing for better reporting of SS
ping results and eliminating some edge cases that could leave oxend and
the storage server "stuck" with inconsistent views of each other's
state.
The reason behind the limit is that the burn amount was supposed to be
encoded using varint encoding, so the limit was there to ensure that,
once we figured out the final burn amount and put it in, we were
guaranteed not to make the tx extra any bigger (just in case that could
make the overall tx a couple of bytes bigger and break the tx size
limit).
However, it never actually *used* varint encoding: the value is instead
encoded as a raw, full-size uint64_t of 8 bytes regardless of its
magnitude, so this check isn't actually doing anything. (And if we
changed it to a varint now we'd break the protocol, so just leave it.)
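
A quick illustration of the difference (a sketch using a generic
LEB128-style varint, not oxen-core's actual serializer):

```cpp
#include <cstddef>
#include <cstdint>

// A varint's encoded size grows with the value, so capping the burn amount
// would have bounded the field size...
size_t varint_size(uint64_t v) {
    size_t n = 1;
    while (v >>= 7) ++n; // 7 payload bits per encoded byte
    return n;
}

// ...but a raw uint64_t is 8 bytes no matter the value, so the cap bounds nothing.
size_t raw_uint64_size(uint64_t) {
    return sizeof(uint64_t); // always 8
}
```

For instance, ~4398 OXEN is roughly 2^42 atomic units (assuming 9
decimal places), which would varint-encode to 7 bytes versus up to 10
for a maximal uint64_t, but since the field is always written as a fixed
8 bytes the cap never affected the tx size at all.
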
It also turns out that this comment was wrong:
    This value (~4398 OXEN) was chosen because it's unlikely to ever be
    needed to be burned in a single transaction
Also I hear that some users really do need more than 640kB RAM. ;-)