It seems that `m_tinfo` can sometimes be null when `m_cursors ==
&m_wcursors` is true, and the upstream Monero code (which is pure
macro) doesn't touch the bool in that case.
For some reason the access into `m_tinfo` only recently started
segfaulting, and only on macOS, and only in a release build.
The workaround (which is indeed a correct fix) appears to avoid the
segfault, but the segfault could retrigger if that invariant doesn't
hold (and it isn't immediately obvious why that invariant *should*
hold).
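A minimal sketch of the shape of the fix — the struct and member names here are stand-ins for the real LMDB cursor classes, not the actual code:

```cpp
#include <cassert>

// Hypothetical stand-ins for the LMDB transaction/cursor structures:
// the point is just to guard the pointer before touching the bool,
// since m_tinfo can legitimately be null in this state.
struct txn_info { bool busy = false; };
struct cursor_set { txn_info* m_tinfo = nullptr; };

void mark_busy(cursor_set& c) {
    if (c.m_tinfo)             // previously dereferenced unconditionally
        c.m_tinfo->busy = true;
}
```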
This, like pretty much all of the LMDB code, is garbage.
Though this statement seems dubious since "The Oxen Project" is not a
legal entity. Should it perhaps be "Oxen Privacy Tech Foundation"? Or
alternatively we could have a statement somewhere that "The Oxen
Project" refers to the OPTF + code contributed by Oxen community members
through github, etc.
Okay enough copyright law.
Remove support for old (non-bt) proofs with the 9.2.0 snode revision
block (I'm not 100% sure on what to call this; "snode revision"? "soft
fork"? "spork"?).
Also bumps the working version to 9.2.0; this likely isn't release
ready, but allows for testing of this on testnet.
Snode revisions are a secondary version that let us put out a mandatory
update for snodes that isn't a hardfork (and so isn't mandatory for
wallets/exchanges/etc.).
The main point of this is to let us make a 9.2.0 release that includes
new mandatory minimums of future versions of storage server (2.2.0) and
lokinet (0.9.4) to bring upgrades to the network.
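As a sketch of the scheme (the names and the exact comparison here are assumptions, not the real oxend API), a network version becomes a (hardfork, snode revision) pair, where only the hardfork component is mandatory for wallets:

```cpp
#include <cassert>
#include <cstdint>
#include <tuple>

// Hypothetical sketch: service nodes must satisfy both components;
// wallets/exchanges only care about the hardfork component, so a
// revision bump is a mandatory snode update but not a hardfork.
struct network_version { uint8_t hardfork; uint8_t snode_revision; };

bool snode_needs_upgrade(network_version required, network_version running) {
    return std::tie(running.hardfork, running.snode_revision)
         < std::tie(required.hardfork, required.snode_revision);
}

bool wallet_needs_upgrade(network_version required, network_version running) {
    return running.hardfork < required.hardfork;  // revision ignored
}
```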
This slightly changes the HF7 blocks to 0 (instead of 1) because,
apparently, we weren't properly checking the HF value of the
pre-first-hf genesis block at all before. (In practice this changes
nothing because genesis blocks are v7 anyway).
This also changes (slightly) how we check for hard forks: if some hard
fork versions are skipped, we still want to know the height at which
each of them triggers. For example, if the HF table contains {7,14}
then we still need to know that the HF14 block height is also the
height that activates HF9, HF10, etc.
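A sketch of the lookup under that rule — the table contents, heights, and function name here are hypothetical, not the real oxend values:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <optional>

// Hypothetical HF table: version -> activation height. Versions 8-13
// are skipped, so asking when HF9 begins must fall through to the
// first entry with version >= 9 (here, HF14).
std::map<uint8_t, uint64_t> hf_table{{7, 0}, {14, 141000}};

std::optional<uint64_t> hard_fork_begins(uint8_t version) {
    // first table entry whose version is >= the requested version
    auto it = hf_table.lower_bound(version);
    if (it == hf_table.end()) return std::nullopt;  // never activates
    return it->second;
}
```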
This makes reachability testing activate at HF19. We probably want to
come back and update this before HF19, but for now we just check but
don't enforce lokinet reachability.
It works just like storage server testing.
Renames report_peer_storage_server_status to report_peer_status, and
repurposes the code to handle both SS and lokinet.
This *doesn't* need a HF by design because the reason bit field was
deliberately designed so that we can add reason fields (older clients
will just ignore unknown bits).
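A sketch of why this works (the bit names here are hypothetical): each reason is an independent bit, so a new lokinet bit can be introduced without a HF, and a pre-update client simply masks it off as an unknown bit:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical reason bits; each reason occupies its own bit so
// reasons can be combined and extended without breaking old clients.
enum decomm_reason : uint16_t {
    storage_server_unreachable = 1 << 0,
    lokinet_unreachable        = 1 << 1,  // the newly added reason
};

// What a pre-update client effectively sees: only the bits it knows.
uint16_t old_client_view(uint16_t reasons) {
    constexpr uint16_t known = storage_server_unreachable;
    return reasons & known;
}
```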
When doing a lookup by owner in simplewallet (`ons_by_owner`) the
wallet was always printing the requested owner as "Owner: ...",
*not* the owner of the actual record returned.
In particular, for a record where WalletA is the primary owner and
WalletX is the backup owner, looking up ONS records by owner for
WalletX would print:
Owner: WalletX
BackupOwner: WalletX
Instead of properly showing the record owner details of:
Owner: WalletA
BackupOwner: WalletX
This fixes it.
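The shape of the fix, as a hedged sketch (names hypothetical): format the returned record's own fields instead of echoing the owner that was queried:

```cpp
#include <cassert>
#include <string>

// Hypothetical simplified record; the buggy code printed the
// requested owner for the "Owner:" line, the fix prints rec.owner.
struct ons_record { std::string owner, backup_owner; };

std::string format_record(const ons_record& rec) {
    return "Owner: " + rec.owner + "\nBackup owner: " + rec.backup_owner;
}
```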
When looking up ONS names by owner (e.g. ons_by_owner in the cli
wallet) the oxend RPC method returns an error for any returned record
where the looked-up owner is only the backup owner: it tries to map
(only) the primary owner to a response index, and fails when the
database returns a record whose primary owner wasn't in the list of
requested addresses.
This fixes the RPC method to properly check both owner and backup owner.
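A minimal sketch of the corrected mapping (names hypothetical, not the actual RPC code): a returned record is matched against the requested addresses via either owner field, where the buggy version only ever compared the primary owner:

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <vector>

struct ons_record { std::string owner, backup_owner; };

// Map a returned DB record back to the index of the requested address
// that produced it; previously only rec.owner was checked, so a
// backup-owner match fell through and triggered the error.
std::optional<size_t> request_index(const std::vector<std::string>& requested,
                                    const ons_record& rec) {
    for (size_t i = 0; i < requested.size(); i++)
        if (requested[i] == rec.owner || requested[i] == rec.backup_owner)
            return i;
    return std::nullopt;
}
```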
This index was being created on a column that doesn't exist; because
the name was quoted (until the commit earlier in this PR), sqlite was
apparently treating it as a string literal.
Cursed AF.
Drop the old name and recreate the index.
This index was only getting created on upgraded (from v7, I think)
databases but not new ones.
Also adds a comment about what the "prune" queries actually do
(because the `>=` seemed really counterintuitive to me until I figured
out that they are only meant for handling a rollback).
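A sketch of that rollback semantic, simplified to an in-memory table and assuming the query parameter is the first reorged height: everything at that height or above is now invalid, hence `>=` rather than `>`:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

struct row { uint64_t height; };

// On rollback, `height` is the first block that was reorged away, so
// rows *at* that height are invalid too — which is why the prune
// comparison is >= and not the intuitive-looking >.
void prune_from(std::vector<row>& rows, uint64_t height) {
    rows.erase(std::remove_if(rows.begin(), rows.end(),
                              [&](const row& r) { return r.height >= height; }),
               rows.end());
}
```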
* Make preparing registration fail if lokinet or the storage server have not received a ping yet
* Add prepare_registration boolean default
* Add signature override for get_human_time_ago to accept an optional uint64. Add stricter comparisons to ping checks
* Remove duplicate logic
* Remove method override and assign/cast last_lokinet_ping/last_storage_server_ping
* Use auto definition and static_cast res.last_lokinet_ping
It breaks the build on the latest Big Sur Xcode version because it
accepts the option but then later produces warnings about it being
unused when trying to use it.
- Convert pkg-config library finding to use IMPORTED_TARGET rather than
the old hacky way of using multiple variables.
- Remove libunbound finding from external (it gets set up already in the
root CMakeLists.txt)
- Remove libzmq searching since we no longer directly depend on it
(except through oxenmq).
This option is incredibly misguided: exceptions are a normal part of C++
error handling that are used *as intended* in lots of places in the
code. Spewing massive amounts of output every time any exception is
thrown anywhere (even when caught!) is terrible.
More than that, we don't ever build with it enabled (for the above
reasons) so this is all just unused code.
We dropped the contrib/depends build system quite a while ago (because
it was nasty), but there are still various DEPENDS checks scattered
through cmake that are just dead code now. This removes them.
bt-encoded proofs have a bug for nodes with different legacy/ed25519
pubkeys that isn't easily solvable (it needs a fix on *both* sender and
receiver ends). That fix is in here (in uptime_proof.cpp) but that
isn't enough to solve it immediately.
This works around the issue by submitting old-style proofs if we are a
node with different legacy/ed25519 pubkeys.
The timestamp inside the proof is only for signature validation, but
we were using it in some places as the uptime proof time without
updating it everywhere we needed to. This fixes it by using our own
timestamp for all local timed events (e.g. when we received it, when
the node is not sending proofs, etc.).