This json serialization layer was only used in the old Monero ZMQ
interface, which no longer exists, and so this is just dead code.
On top of that, it doesn't work properly for serializing CLSAG
transactions, so just delete it.
Converts all use of boost::filesystem to std::filesystem.
For macos and potentially other exotic systems where std::filesystem
isn't available, we use ghc::filesystem instead (which is a drop-in
replacement for std::filesystem, unlike boost::filesystem).
This also greatly changes how we handle filenames internally by holding
them in filesystem::path objects as soon as possible (using
fs::u8path()), rather than strings, which avoids a ton of issues around
unicode filenames. As a result this lets us drop the boost::locale
dependency on Windows along with a bunch of messy Windows ifdef code,
and avoids the need for doing gross boost locale codecvt calls.
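The early-conversion pattern described above can be sketched like this (a minimal illustration; `wallet_path_from_utf8` is a hypothetical helper, not the actual wallet code):

```cpp
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// Convert a UTF-8 string to a path as soon as it enters the program;
// fs::path then handles the platform's native encoding (UTF-16 on
// Windows) internally, so no boost::locale codecvt calls are needed.
fs::path wallet_path_from_utf8(const std::string& utf8_name) {
    return fs::u8path(utf8_name);
}
```

Note that `fs::u8path` is the right tool under C++17 (it was later deprecated in C++20 in favor of `char8_t` paths).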
When targeting macos <10.14, macos won't allow use of anything from
C++17 that throws, such as:
- std::get on a variant
- std::visit
- std::optional::value()
- std::any_cast
This avoids all of these.
For std::get, we either replace with std::get_if (where appropriate), or
else use a `var::get` implementation of std::get added to lokimq (also
updated here). (This `var` namespace is just an `std` alias everywhere
*except* old target macos).
For std::visit, likewise lokimq adds an var::visit implementation for
old macos that we use.
std::optional::value() uses weren't useful anyway: everywhere we called
it we had already checked that the optional has a value, in which case
we can use `*opt` (which doesn't check for contents and throw).
std::any just has to be avoided as far as I can tell, but the one place
we used it is only ever a block, so I just replaced it with a `const
block*`.
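The non-throwing replacements look roughly like this (illustrative functions, not the actual loki-core code):

```cpp
#include <optional>
#include <variant>

// std::get throws on a bad alternative, so on old-macos targets we use
// std::get_if, which returns nullptr on mismatch instead of throwing.
int extract_int(const std::variant<int, double>& v) {
    if (const int* i = std::get_if<int>(&v))
        return *i;
    return 0;
}

// Where we have already checked that the optional is engaged, *opt
// replaces the throwing .value() accessor.
int value_or_sentinel(const std::optional<int>& opt) {
    return opt ? *opt : -1;
}
```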
wallet2 fails to build a tx using a public node if it needs to request
more than 5000 txes, because public nodes return an error if more than
5000 txes are requested.
This fixes the wallet to make multiple requests of 5000 each when needed
so that it won't hit the remote's error condition. (This is a bit
better than just upping the limit because other RPC requests get a
chance to run).
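The batching logic is roughly this shape (a sketch; the helper name and the use of plain ints for txids are illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t MAX_TXS_PER_REQUEST = 5000;  // the public node limit

// Split a large tx lookup into batches of at most 5000 so that no
// single request trips the remote's error condition.
std::vector<std::vector<int>> chunk_requests(const std::vector<int>& txids) {
    std::vector<std::vector<int>> batches;
    for (std::size_t i = 0; i < txids.size(); i += MAX_TXS_PER_REQUEST)
        batches.emplace_back(
            txids.begin() + i,
            txids.begin() + std::min(i + MAX_TXS_PER_REQUEST, txids.size()));
    return batches;
}
```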
This changes the `get_quorum_state` RPC endpoint in two ways:
- If requesting a pulse quorum for a height that includes the current
height (i.e. top block + 1), this returns the current height, round-0
pulse quorum.
- When requesting the latest quorum state (i.e. the no-argument request)
you now get back the latest available quorum of each requested type,
instead of the quorum for the latest available block. Thus requesting
all types will now give you:
- the current top-of-the-chain round-0 pulse quorum
- the top block obligations quorum (no change)
- the top divisible-by-4 block for checkpoint quorums
- the top divisible-by-5 block for blink quorums
Previously you would just get whatever quorums existed for the top
height, which often meant just the one-block-old pulse quorum + top
block obligations quorum, with a checkpoint quorum only 25% of the
time and a blink quorum only 20% of the time.
- Also updated the RPC comments for GET_QUORUM_STATE, both to reflect
the above and to update some parts which were a bit stale.
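Under the new behaviour, the height chosen for each interval-based quorum type can be sketched as follows (illustrative helper; the intervals are taken from the description above):

```cpp
#include <cstdint>

// Checkpoint quorums exist only at heights divisible by 4 and blink
// quorums at heights divisible by 5, so the latest available quorum of
// those types lives at the top height rounded down to the interval.
std::uint64_t latest_quorum_height(std::uint64_t top_height,
                                   std::uint64_t interval) {
    return top_height - top_height % interval;
}
```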
- Sometimes the wallet would query TX's before they were blink approved,
so they got cached into the wallet as normal TXs, preventing external
services from using 0-confirmation on transactions.
We make the wallet only conduct long-polling for blinks, so that it
notices the transaction only after blink approval. The downside here is
that normal mempool transactions won't appear until they are placed
into a block. With Pulse this is mitigated by block times that reliably
land within sub-seconds of the 2 min target.
Later on we restrict this, but are leaking it to non-admin users if we
hit this case.
(The caller is meant to check .untrusted to tell if the info came from a
bootstrap node).
Fixes a couple of issues that caused problems when making requests to
GET_SERVICE_NODES:
- Internal requests making an rpc request for `GET_SERVICE_NODES::req{}`
weren't ending up specifying `"all":true` in requested_fields *or* any
other fields, so they requested no fields and got no fields in
response.
That, however, exposed another problem:
- the serialized value *loading* of service node entries shouldn't be
looking at requested fields at all: it should just deserialize
whatever is incoming. This was resulting in a parsed incoming service
node RPC request ending up with all-empty data even when the data
*was* there, which consequently broke the get-all-service-nodes call
in the wallet, making the wallet think it had no locked service nodes.
The fix for issue 1 is to ignore requested_fields when `!is_store`.
The fix for issue 2 is to make requested_fields a std::optional and
handle the null-mean-all logic in the RPC call itself rather than in the
serialization layer.
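The null-means-all logic in the RPC call can be sketched like this (hypothetical helper; the real requested_fields type differs):

```cpp
#include <optional>
#include <set>
#include <string>

// A std::nullopt requested_fields means "return all fields"; the
// serialization layer itself no longer consults this when loading.
bool field_requested(const std::optional<std::set<std::string>>& requested,
                     const std::string& field) {
    return !requested || requested->count(field) > 0;
}
```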
Checking clocks doesn't work when syncing the chain: a sync performed
in retrospect will always conclude that the chain needed miner
difficulty instead of the Pulse fixed difficulty.
This is correctly done on the rescan because it checks if the block is
actually a Pulse or miner generated block.
Also remove the difficulty cookie/cache to keep difficulty logic
explicit. In the majority of cases after HF16 the hot path will always
hit the cheap branch (and return PULSE_FIXED_DIFFICULTY) instead of
pulling timestamps for difficulty.
name and value were being set to raw binary in the tx_extra for LNS
transactions; they should be hex encoded.
Also rename `name` to `name_hash` here to match what we name it in LNS
RPC calls.
- Allow retrieving multiple blocks headers by height in the same request
(matching the API already available to do that for by-hash lookups).
- make pow_hash an optional so that it gets omitted entirely if not
requested
- Likewise for block header responses when using the array arguments
instead of the single value argument.
Currently where we need to look up a block by height we do:
1. get block hash for the given height
2. look up block by hash
This hits the lmdb layer and does:
3. look up height from hash in hashes-to-height table
4. look up block from height in blocks table
which is pointless. This commit adds a `get_block_by_height()` that
avoids the extra runaround, and converts code doing height lookups to
use the new method.
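In toy form (with a std::unordered_map standing in for the lmdb blocks table), the difference is going straight to the height-keyed table:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

struct Block { std::uint64_t height; std::string data; };

// Stand-in for the lmdb blocks table, which is keyed by height.
const std::unordered_map<std::uint64_t, Block> blocks_by_height{
    {5, {5, "blk5"}}};

// Direct lookup: one table hit instead of the old
// height -> hash -> height -> block round trip.
const Block* get_block_by_height(std::uint64_t height) {
    auto it = blocks_by_height.find(height);
    return it == blocks_by_height.end() ? nullptr : &it->second;
}
```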
Payment IDs are hidden inside the nonce field but it's gross to expect
the RPC consumer to know about the magic first nonce byte flagging the
nonce as a payment id.
This replaces the way we handle updates, removing the need for the
registration height and prev_txid fields: instead we simply store a new
row copied from the old row with changes applied as needed.
Selecting values is then a matter of just selecting the row with the
highest update_height.
The registration_height isn't needed anymore, either, since we now have
an expiration_height (i.e. we don't need to know when it was
registered); now we can just use the update_height.
txid is still present and effectively still serves as a nonce to prevent
duplicate/conflicting updates: the new TX still needs to reference the
previous txid, but we don't need to also store the prev_txid because we
don't need to walk back the chain anymore.
Rollbacks then become *significantly* easier: we just delete anything
with an update_height >= the new blockchain height: the new
"highest-update_height" row will now have the proper values.
Not included in this commit but still to come is pruning the database to
remove heights that are no longer needed (i.e. have been superseded by a
more recent update row more than 5000 blocks ago).
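The rollback rule can be sketched as follows (toy types; the real implementation works on database rows):

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

struct LnsRow { std::uint64_t update_height; std::string value; };

// Rolling back to new_height just deletes every row at or above it;
// the surviving highest-update_height row is then the current record.
std::vector<LnsRow> rollback(std::vector<LnsRow> rows,
                             std::uint64_t new_height) {
    rows.erase(std::remove_if(rows.begin(), rows.end(),
                   [&](const LnsRow& r) {
                       return r.update_height >= new_height;
                   }),
               rows.end());
    return rows;
}
```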
Revamps how .loki LNS registrations work:
- Enable lokinet registrations beginning at HF16.
- rework renewal so that you can renew at any time and it simply adds to the end of the current
expiry. Previously there was only a window in which you could renew.
- Renewals are a new type of LNS transaction, distinct from buys and updates. (Internally it is an
update with no fields, which cannot be produced in the existing code).
- Add optional "type=" parameter to lns commands. Commands default to trying to auto-detect (i.e.
if the name ends with .loki it is lokinet), but the type allows you to be explicit *and* allows
selecting non-default registration lengths for lokinet buys/renewals.
- change .loki naming requirements: we now require <= 32 chars if it doesn't contain a -, and 63 if
it does. We also reserve names starting "??--" for any ?? other than xn-- (punycode), as this is
a DNS restriction. "loki.loki" and "snode.loki" are also now reserved (just in case someone
sticks .loki as a DNS search domain).
- Tweak LNS registration times to consider "a year" to be 368 days worth of blocks (to allow for
leap years and some minor block time drift).
- Overhaul how LNS registrations are displayed in the cli wallet. For example:
[wallet L6QPcD]: lns_print_name_to_owners jasonv.loki jason.loki jasonz.loki
Error: jasonv.loki not found
Name: jason.loki
Type: lokinet
Value: azfoj73snr9f3neh5c6sf7rtbaeabyxhr1m4un5aydsmsrxo964o.loki
Owner: L6QPcDVp6Fu7HwtXrXjtfvWvgBPvvMQ9FiyquMWn2BBEDsk2vydwu1A3BrK2uQcCo94G7HA5xiKvpZ4CMQva6pxW2GXkCG9
Last updated height: 46
Expiration height: 75
Encrypted value: 870e42cd172a(snip)
Error: jasonz.loki not found
- Add an RPC end-point to do simple LNS resolution; you can get the same info out of
names-to-owners, but the new lns_resolve end-point is considerably simpler for doing simple
lookups (e.g. for lokinet), and allows for a simpler SQL query + processing.
Code changes:
- Rename mapping_type::lokinet_1year to mapping_type::lokinet (see next point).
- Don't store lokinet_2y, etc. in the database, but instead always store as type=2/::lokinet. The
LNS extra data can still specify different lengths, but this now just affects the
expiration_height value that we set.
- Reworked some binding code to use std::variant's and add a bind_container() to simplify passing in
a variable list of bind parameters of different types.
- Accept both base64 and hex inputs for binary LNS parameters in the RPC interface.
- This commit adds some (incomplete) expiry adjustment code, but ignore it as it all gets replaced
with the following commit to overhaul record updating.
- Updated a bunch of test suite code, mainly related to lokinet.
- Some random C++17 niceties (string_view, variant, structured binding returns) in the related code.
- Tweaked the test suite to generate a bit fewer blocks in some cases where we just need to
confirm/unlock transfers rather than a coinbase tx.
Otherwise RPCing into the wallet and triggering a refresh doesn't really
work, i.e.
loki-wallet-cli --wallet-file file --password '' refresh
loki-wallet-cli --wallet-file file --password '' balance <-- this returns 0
- Add opt-in tx-extra parsing to `get_transactions`; this now lets you
get most tx details (including service node tx info) via the RPC.
- Add ability for RPC block header requests to include a list of tx
hashes contained in the block.
- remove long-deprecated txs_as_hex from `get_transactions`: this
essentially doubled the length of requests since txs[i].as_hex has the
exact same data. Despite this being the "new" format for several
years, wallet code was still relying on the "old" format. Since 8.x
already breaks the RPC, this seems a good time to remove it.
- Significantly cleaned up and documented how pruned transactions
results are returned in GET_TRANSACTIONS. Previously it was fairly
uncertain when you'd get a pruned or unpruned transaction. (See
comments added in core_rpc_server_commands_defs.h)
- Stop returning the transaction data hex values when you ask for
`decode_as_json`. The documentation already implied you don't get
it, but you do, and this is virtually always just wasted extra data
when you're asking for the json-decoded value.
- fix miner txes being treated in the RPC as pruned transactions (they
aren't: they just have no prunable data).
- Made a bunch of `get_transactions` fields (including virtually all of
the new parsed tx-extras fields) into std::optional's so that the keys
aren't included when the values are empty.
- Fix bug in `get_transactions` that raised an exception if *some* (but
not all) of the requested transactions were missed. Previously you
could only get all-missed, all-found, or an exception.
This caches the result of a get_coinbase_tx_sum to H-30 (if the last
request started from 0 and retrieved up to at least H-30). This makes
get_coinbase_tx_sum calls to get the full chain values massively faster
for all but the first call.
The "first call" is kind of tricky, though, because it can take a couple
minutes, during which if we get multiple calls (e.g. from the block
explorer) we might get multiple threads trying to create the cache all
at once, and *each* of those takes minutes (and chews up an admin rpc
thread). So this commit also blocks out other threads from getting a
cacheable result while the cache is being built; instead those calls get
a null optional back.
Once the cache is built, requests start returning pretty much instantly
(on my desktop system with the blockchain data cached in RAM I process
around 5k blocks per second).
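The build-once gating can be sketched like this (names and the stand-in sum are illustrative, not the real implementation): the first caller computes the cache while later callers get std::nullopt back instead of each spending minutes rebuilding it.

```cpp
#include <cstdint>
#include <mutex>
#include <optional>

struct CoinbaseSumCache {
    std::mutex m;
    bool building = false;
    std::optional<std::uint64_t> cached;

    std::optional<std::uint64_t> get_or_build() {
        {
            std::lock_guard lock{m};
            if (cached) return cached;
            if (building) return std::nullopt;  // another thread is on it
            building = true;
        }
        std::uint64_t sum = 12345;  // stand-in for the minutes-long chain scan
        std::lock_guard lock{m};
        cached = sum;
        building = false;
        return cached;
    }
};
```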
Non-JSONRPC HTTP requests do the body parsing (either binary or JSON) in
the worker thread, but by then the string_view from uWebSockets is no
longer valid. Fix it by making rpc_request able to hold an owned
std::string and use that for HTTP (non-JSONRPC) requests.
- Replaces the wallet RPC classes with ones like the core RPC server
(with serialization code moved into a new .cpp file).
- Restricted commands are now carried through the RPC serialization
types (by inheriting from RESTRICTED instead of RPC_COMMAND) and
restrictions are handled in one place rather than being handled in
each of the 49 restricted endpoints. This differs a little from how
the core http server works (which has a PUBLIC base class) because
for the wallet rpc server unrestricted really doesn't mean "public",
it means something closer to view-only.
- GET_TRANSFERS_CSV is now restricted (it looks like an oversight that
it wasn't before since GET_TRANSFERS is restricted)
- GET_ADDRESS_BOOK_ENTRY is now restricted. Since restricted mode is
meant to provide something like view-only access, it doesn't make
much sense that address book entries were available.
- Use uWebSockets to provide the wallet RPC server HTTP functionality.
This version is quite a bit simpler than the core RPC version since it
doesn't support multithreaded (parallel) requests, and so we don't
have to worry about queuing jobs.
- Converted all the numeric wallet rpc error codes defines to constexprs
- Changed how endpoints get called; previously this was called:
bool on_some_endpoint(const wallet_rpc::COMMAND_RPC_SOME_ENDPOINT::request& req, wallet_rpc::COMMAND_RPC_SOME_ENDPOINT::response& res, epee::json_rpc::error& er, const connection_context *ctx = NULL)
This PR changes it similarly to how core_rpc_server's endpoints work:
wallet_rpc::SOME_ENDPOINT invoke(wallet_rpc::COMMAND_RPC_SOME_ENDPOINT::request&& req);
That is:
- the response is now returned
- the request is provided by mutable rvalue reference
- the error object is gone (just throw instead)
- the connection_context is gone (it was not used at all by any wallet
rpc endpoint).
- Consolidated most of the (identical) exception handling to the RPC
method invocation point rather than needing to repeat it in each
individual endpoint. This means each endpoint's `invoke` method can
now just throw (or not catch) exceptions. Some try/catches are still
there because they are converting one type of exception into another,
but the generic ones that return a generic error are gone.
- Removed all remaining epee http code.
- DRYed out some wallet rpc code.
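The consolidated exception handling amounts to one wrapper at the method invocation point, roughly like this (simplified stand-in types, not the actual wallet rpc signatures):

```cpp
#include <exception>
#include <functional>
#include <stdexcept>
#include <string>

// Endpoints just throw; one wrapper at the invocation point converts
// any exception into an error reply instead of repeating try/catch in
// each individual endpoint.
std::string invoke_endpoint(const std::function<std::string()>& endpoint) {
    try {
        return endpoint();
    } catch (const std::exception& e) {
        return std::string{"error: "} + e.what();
    }
}
```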
In short: epee's http client is garbage, standard violating, and
unreliable.
This completely removes the epee http client support and replaces it
with cpr, a curl-based C++ wrapper. rpc/http_client.h wraps cpr for RPC
requests specifically, but it is also usable directly.
This replacement has a number of advantages:
- requests are considerably more reliable. The epee http client code
assumes that a connection will be kept alive forever, and returns a
failure if a connection is ever closed. This results in some very
annoying things: for example, preparing a transaction and then waiting
a long time before confirming it will usually result in an error
communicating with the daemon. This is just terrible behaviour: the
right thing to do on a connection failure is to resubmit the request.
- epee's http client is broken in lots of other ways: for example, it
tries throwing SSL at the port to see if it is HTTPS, but this is
protocol violating and just breaks (with a several second timeout) on
anything that *isn't* epee http server (for example, when lokid is
behind a proxying server).
- even when it isn't doing the above, the client breaks in other ways:
for example, there is a comment (replaced in this PR) in the Trezor PR
code that forces a connection close after every request because epee's
http client doesn't do proper keep-alive request handling.
- it seems noticeably faster to me in practical use in this PR; both
simple requests (for example, when running `lokid status`) and
wallet<->daemon connections are faster, probably because of crappy
code in epee. (I think this is also related to the throw-ssl-at-it
junk above: the epee client always generates an ssl certificate during
static initialization because it might need one at some point).
- significantly reduces the amount of code we have to maintain.
- removes all the epee ssl option code: curl can handle all of that just
fine.
- removes the epee socks proxy code; curl can handle that just fine.
(And can do more: it also supports using HTTP/HTTPS proxies).
- When a cli wallet connection fails we now show why it failed (which
now is an error message from curl), which could have all sorts of
reasons like hostname resolution failure, bad ssl certificate, etc.
Previously you just got a useless generic error that tells you
nothing.
Other related changes in this PR:
- Drops the check-for-update and download-update code. To the best of
my knowledge these have never been supported in loki-core and so it
didn't seem worth the trouble to convert them to use cpr for the
requests.
- Cleaned up node_rpc_proxy return values: there was an inconsistent mix
of ways to return errors and how the returned strings were handled.
Instead this cleans it up to return a pair<bool, val>, which (with
C++17) can be transparently captured as:
auto [success, val] = node.whatever(req);
This drops the failure message string, but it was almost always set to
something fairly useless (if we want to resurrect it we could easily
change the first element to be a custom type with a bool operator for
success, and a `.error` attribute containing some error string, but
for the most part the current code wasn't doing much useful with the
failure string).
- changed local detection (for automatic trusted daemon determination)
to just look for localhost, and to not try to resolve anything.
Trusting non-public IPs does not work well (e.g. with lokinet where
all .loki addresses resolve to a local IP).
- ssl fingerprint option is removed; this isn't supported by curl
(because it is essentially just duplicating what a custom cainfo
bundle does)
- --daemon-ssl-allow-chained is removed; it wasn't a useful option (if
you don't want chaining, don't specify a cainfo chain).
- --daemon-address is now a URL instead of just host:port. (If you omit
the protocol, http:// is prepended).
- --daemon-host and --daemon-port are now deprecated and produce a
warning (in simplewallet) if used; the replacement is to use
--daemon-address.
- --daemon-ssl is deprecated; specify --daemon-address=https://whatever
instead.
- the above three are now hidden from --help
- reordered the wallet connection options to make more logical sense.
This replaces the NIH epee http server, which does not work all that
well, with an external C++ library called uWebSockets. Fundamentally
this gives the following advantages:
- Much less code to maintain
- Just one thread for handling HTTP connections versus epee's pool of
threads
- Uses existing LokiMQ job server and existing thread pool for handling
the actual tasks; they are processed/scheduled in the same "rpc" or
"admin" queues as lokimq rpc calls. One notable benefit is that "admin"
rpc commands get their own queue (and thus cannot be delayed by long rpc
commands). Currently the lokimq threads and the http rpc thread pool
and the p2p thread pool and the job queue thread pool and the dns lookup
thread pool and... are *all* different thread pools; this is a step
towards consolidating them.
- Very little mutex contention (which has been a major problem with epee
RPC in the past): there is one mutex (inside uWebSockets) for putting
responses back into the thread managing the connection; everything
internally gets handled through (lock-free) lokimq inproc sockets.
- Faster RPC performance on average, and much better worst case
performance. Epee's http interface seems to have some race condition
that occasionally stalls a request (even a very simple one) for a dozen
or more seconds for no good reason.
- Long polling gets redone here to no longer need threads; instead we
just store the request and respond either from the thread pool when a
result arrives, or from a timer (that runs once/second) that times out
long polls.
---
The basic idea of how this works from a high level:
We launch a single thread to handle HTTP RPC requests and response data.
This uWebSockets thread is essentially running an event loop: it never
actually handles any logic; it only serves to shuttle data that arrives
in a request to some other thread, and then, at some later point, to
send some reply back to that waiting connection. Everything is
asynchronous and non-blocking here: the basic uWebSockets event loop
just operates as things arrive, passes it off immediately, and goes back
to waiting for the next thing to arrive.
The basic flow is like this:
0. uWS thread -- listens on localhost:22023
1. uWS thread -- incoming request on localhost:22023
2. uWS thread -- fires callback, which injects the task into the LokiMQ job queue
3. LMQ main loop -- schedules it as an RPC job
4. LMQ rpc thread -- Some LokiMQ thread runs it, gets the result
5. LMQ rpc thread -- Result gets queued up for the uWS thread
6. uWS thread -- takes the request and starts sending it
(asynchronously) back to the requestor.
In more detail:
uWebSockets has registered handlers for non-jsonrpc requests (legacy
JSON or binary). If the port is restricted then admin commands get
mapped to an "Access denied" response handler, otherwise
public commands (and admin commands on an unrestricted port) go to the
rpc command handler.
POST requests to /json_rpc have their own handler; this is a little
different than the above because it has to parse the request before it
can determine whether it is allowed or not, but once this is done it
continues roughly the same as legacy/binary requests.
uWebSockets then listens on the given IP/port for new incoming requests,
and starts listening for requests in a thread (we own this thread).
When a request arrives, it fires the event handler for that request.
(This may happen multiple times, if the client is sending a bunch of
data in a POST request). Once we have the full request, we then queue
the job in LokiMQ, putting it in the "rpc" or "admin" command
categories. (The one practical difference here is that "admin" is
configured to be allowed to start up its own thread if all other threads
are busy, while "rpc" commands are prioritized along with everything
else.) LokiMQ then schedules this, along with native LokiMQ "rpc." or
"admin." requests.
When a LMQ worker thread becomes available, the RPC command gets called
in it and runs. Whatever output it produces (or error message, if it
throws) then gets wrapped up in jsonrpc boilerplate (if necessary), and
delivered to the uWebSockets thread to be sent in reply to that request.
uWebSockets picks up the data and sends whatever it can without
blocking, then buffers whatever it couldn't send to be sent again in a
later event loop iteration once the requestor can accept more data.
(This part is outside lokid; we only have to give uWS the data and let
it worry about delivery).
---
PR specifics:
Things removed from this PR:
1. ssl settings; with this PR the HTTP RPC interface is plain-text. The
previous default generated a self-signed certificate for the server on
startup and then the client accepted any certificate. This is actually
*worse* than unencrypted because it is entirely MITM-readable and yet
might make people think that their RPC communication is encrypted, and
setting up actual certificates is difficult enough that I think most
people don't bother.
uWebSockets *does* support HTTPS, and we could glue the existing options
into it, but I'm not convinced it's worthwhile: it works much better to
put HTTPS in a front-end proxy holding the certificate that proxies
requests to the backend (which can then listen in restricted mode on
some localhost port). One reason this is better is that it is much
easier to reload and/or restart such a front-end server, while
certificate updates with lokid require a full restart. Another reason
is that you get an error page instead of a timeout if something is wrong
with the backend. Finally we also save having to generate a temporary
certificate on *every* lokid invocation.
2. HTTP Digest authentication. Digest authentication is obsolete (and
was already obsolete when it got added to Monero). HTTP-Digest was
originally an attempt to provide a password authentication mechanism
that does not leak the password in transit, but still required that the
server know the password. It only has marginal value against replay
attacks, and is made entirely obsolete by sending traffic over HTTPS
instead. No client out there supports Digest but *not* Basic auth, and
so given the limited usefulness it seems pointless to support more than
Basic auth for HTTP RPC login.
What's worse is that epee's HTTP Digest authentication is a terrible
implementation: it uses boost::spirit -- a recursive descent parser
meant for building complex language grammars -- just to parse a single
HTTP header for Digest auth. This is a big load of crap that should
never have been accepted upstream, and that we should get rid of (even
if we wanted to support Digest auth it takes less than 100 lines of code
to do it when *not* using a recursive descent parser).
Binary, compressed output encoded in JSON actually makes the result
substantially *larger* because most blocks have <32 outputs in them,
which means most of the compressed integer values have to be encoded as
6 bytes "\u00xx", which just explodes the JSON response size.
This commit forces binary/compress off for the non-binary
get_output_distribution request, since *not* compressing it is usually
going to be smaller.
In current master (and monero) the effect is reduced, because upstream
epee produces flat out invalid JSON containing raw bytes <0x20.
For example, jq can't parse either a binary or binary+compress response:
curl -X POST http://public.loki.foundation:22023/json_rpc -d \
'{"jsonrpc":"2.0","id":"0","method":"get_output_distribution",
"params":{"to_height":0,"amounts":[0],"binary":true}}' \
| jq .
parse error: Invalid string: control characters from U+0000 through U+001F must be escaped at line 10, column 4262
curl -X POST http://public.loki.foundation:22023/json_rpc -d \
'{"jsonrpc":"2.0","id":"0","method":"get_output_distribution",
"params":{"to_height":0,"amounts":[0],"binary":true,"compress":true}}' \
| jq .
parse error: Invalid numeric literal at line 10, column 487
This purges epee::critical_region/epee::critical_section and the awful
CRITICAL_REGION_LOCAL and CRITICAL_REGION_LOCAL1 and
CRITICAL_REGION_BEGIN1 and all that crap from epee code.
This wrapper class around a mutex is just painful, macro-infested
indirection that accomplishes nothing (and, worse, forces all using code
to use a std::recursive_mutex even when a different mutex type is more
appropriate).
This commit purges it, replacing the "critical_section" mutex wrappers
with either std::mutex, std::recursive_mutex, or std::shared_mutex as
appropriate. I kept anything that looked uncertain as a
recursive_mutex, simple cases that obviously don't recurse as
std::mutex, and simple cases with reader/writer mechanics as a
shared_mutex.
Ideally all the recursive_mutexes should be eliminated because a
recursive_mutex is almost always a design flaw where someone has let the
locking code get impossibly tangled, but that requires a lot more time
to properly trace down all the ways the mutexes are used.
Other notable changes:
- There was one NIH promise/future-like class here that was used in
exactly one place in p2p/net_node; I replaced it with a
std::promise/future.
- moved the mutex for LMDB resizing into LMDB itself; having it in the
abstract base class is bad design, and also made it impossible to make a
moveable base class (which gets used for the fake db classes in the test
code).
- pessimizing move in wallet2 prevents copy elision
- various for loops were creating copies (clang-10 now warns about
this). Mostly this is because they had the type wrong when looping
through a map: the iterator type of a `map<K, V>` is `pair<const K,
V>` not `pair<K, V>`. Replaced them with C++17:
for (const auto& [key, val] : var)
which is so much nicer.
- cryptonote::core did not have a virtual destructor, but had virtual
methods (causing both a warning, and likely a crash if we ever have
something inheriting from it held in a unique_ptr<core>).
- core() constructor still had explicit even though it lost the single
argument.
- test code class had a `final` destructor but wasn't marked final. (It
also has a billion superfluous `virtual` declarations but I left them
in place because it's just test code).
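The map-iteration copy mentioned above comes from an element-type mismatch; a small illustration:

```cpp
#include <map>
#include <string>

// The element type of std::map<K, V> is std::pair<const K, V>, so a
// loop over `const std::pair<K, V>&` silently converts (i.e. copies)
// every element. Structured bindings bind the real element type and
// avoid the copy.
int sum_values(const std::map<std::string, int>& m) {
    int total = 0;
    for (const auto& [key, val] : m) {  // binds pair<const K, V>, no copy
        (void)key;
        total += val;
    }
    return total;
}
```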
common/util.h has become something of a dumping ground of random
functions. This splits them up a little by moving the filesystem bits
to common/file.h, the sha256sum functions to common/sha256sum.h, and the
(singleton) signal handler to common/signal_handler.h.
A huge amount of this is repetitive:
- `boost::get<T>(variant)` becomes `std::get<T>(variant)`
- `boost::get<T>(variant_ptr)` becomes `std::get_if<T>(variant_ptr)`
- `variant.type() == typeid(T)` becomes `std::holds_alternative<T>(variant)`
There are also some simplifications to visitors using simpler stl
visitors, or (simpler still) generic lambdas as visitors.
Also adds boost serialization serializers for std::variant and
std::optional.
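The three mechanical conversions can be seen side by side in a small illustrative function:

```cpp
#include <string>
#include <variant>

std::string describe(const std::variant<int, std::string>& v) {
    if (std::holds_alternative<int>(v))  // was: v.type() == typeid(int)
        return "int";
    // was: boost::get<std::string>(&v), which also returned a pointer
    if (const auto* s = std::get_if<std::string>(&v))
        return "string: " + *s;
    return "other";
}
```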
This overhauls the (unpleasant) epee "portable storage" interface to be
a bit easier to work with. The diff is fairly large because of the
amount of type indirection that was being used, but at its core it
makes these simplifications:
- `hsection` was just a typedef for a `section*`, so just use `section*`
instead to not pretend the value is something it isn't.
- use std::variant instead of boost::variant to store the portable
storage items.
- don't make the variant list recursive with itself since the
serialization doesn't support that anyway (i.e. it doesn't support a
list of lists).
- greatly simplified the variant visiting by using generic lambdas.
- the array traversal interface was a horrible mess. Replaced it with
a simpler custom iterator approach.
- replaced the `selector<bool>` templated class with templated function.
So for example,
epee::serialization::selector<is_store>::serialize_t_val_as_blob(...)
becomes:
epee::serialization::perform_serialize_blob<is_store>(bytes, stg, parent_section, "addr");
and similar for the other types of serializing.
Changes all boost mutexes, locks, and condition_variables to their stl
equivalents.
Changes all lock_guard/unique_lock/shared_lock to not specify the mutex
type (C++17), e.g.
std::lock_guard foo{mutex};
instead of
std::lock_guard<oh::um::what::mutex> foo{mutex};
Also changes some related boost::thread calls to std::thread, and some
related boost chrono calls to stl chrono.
The remaining boost::thread uses aren't changed to std::thread here
because some of those instances rely on boost thread extensions.
- replace the gross cmake shell script tools being used to create the data file with
cmake code and a single configure_file
- change checkpoint data to string_view
- tools::sha256sum() presented two overloads that did very different
things: one takes a pointer+size, the other takes a string lvalue
ref, yet the string lvalue reference overload is *entirely* different
(it reads a file). Append _str and _file suffixes to make it less
dangerous.
Switch loki dev branch to C++17 compilation, and update the code with
various C++17 niceties.
- stop including the (deprecated) lokimq/string_view.h header and
instead switch everything to use std::string_view and `""sv` instead of
`""_sv`.
- std::string_view is much nicer than epee::span, so updated various
loki-specific code to use it instead.
- made epee "portable storage" serialization accept a std::string_view
instead of const lvalue std::string so that we can avoid copying.
- switched from mapbox::variant to std::variant
- use `auto [a, b] = whatever()` instead of `T1 a; T2 b; std::tie(a, b)
= whatever()` in a couple places (in the wallet code).
- switch to std::lock(...) instead of boost::lock(...) for simultaneous
lock acquisition. boost::lock() won't compile in C++17 mode when given
locks of different types.
- removed various pre-C++17 workarounds, e.g. for fold expressions,
unused argument attributes, and byte-spannable object detection.
- class template deduction means lock types no longer have to specify
the mutex, so `std::unique_lock<std::mutex> lock{mutex}` can become
`std::unique_lock lock{mutex}`. This will make switching any mutex
types (e.g. from boost to std mutexes) far easier as you just have to
update the type in the header and everything should work. This also
makes the tools::unique_lock and tools::shared_lock methods redundant
(which were a sort of poor-mans-pre-C++17 way to eliminate the
redundancy) so they are now gone and replaced with direct unique_lock or
shared_lock constructions.
- Redid the LNS validation using a string_view: instead of using raw
char pointers, the code now uses a string view and chops off parts of the
view as it validates. So, for instance, it starts with "abcd.loki",
validates the ".loki" and chops the view to "abcd", then validates the
first character and chops to "bcd", validates the last and chops to
"bc", then can just check everything remaining for is-valid-middle-char.
- LNS validation gained a couple minor validation checks in the process:
- slightly tightened the requirement on lokinet addresses to require
that the last character of the mapped address is 'y' or 'o' (the
last base32z char holds only one significant bit).
- In parse_owner_to_generic_owner made sure that the owner value has
the correct size (otherwise we could end up not filling or
overfilling the pubkey buffer).
- Replaced base32z/base64/hex conversions with lokimq's versions which
have a nicer interface, are better optimized, and don't depend on epee.
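The chop-as-you-validate approach above can be sketched in a few lines; the rules here (alphanumeric edges, alnum-or-hyphen middle, `.loki` suffix) are a simplification for illustration, not the real LNS validation logic:

```cpp
#include <cctype>
#include <string_view>

// Simplified sketch: validate "abcd.loki" by chopping the view as each part
// is checked, rather than juggling raw char pointers and indices.
bool validate_loki_name(std::string_view name) {
    constexpr std::string_view suffix = ".loki";
    if (name.size() <= suffix.size() ||
            name.substr(name.size() - suffix.size()) != suffix)
        return false;
    name.remove_suffix(suffix.size());          // "abcd.loki" -> "abcd"
    auto alnum = [](unsigned char c) { return std::isalnum(c) != 0; };
    if (!alnum(name.front())) return false;     // first char must be alnum
    name.remove_prefix(1);                      // "abcd" -> "bcd"
    if (!name.empty()) {
        if (!alnum(name.back())) return false;  // last char must be alnum
        name.remove_suffix(1);                  // "bcd" -> "bc"
    }
    for (unsigned char c : name)                // everything left is a middle char
        if (!alnum(c) && c != '-') return false;
    return true;
}
```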
Monero decided to implement this by rebuilding the blockchain class in
tests with another FakeDB. You can trivially avoid this by restructuring
get_block_longhash to not have a high level dependency on the blockchain
itself.
Having the type list defined in the core_rpc_server.cpp file but the
actual list of types in core_rpc_server_command_defs.h is sort of
awkward: you have to change two separate files to add a type. This
moves the type list to the bottom of core_rpc_server_command_defs.h,
instead, which allows cleaning up some bits of core_rpc_server.cpp that
were there to deal with it.
* fix segfault when exiting cli wallet due to running poll thread
* fix whitespace
* style
* support for amount of blacklisted stakes in wallets
* bump version number for blacklist entries with amounts
* default initialize ints in key_image_blacklist_entry
* whitespace
High-level details:
This redesigns the RPC layer to make it much easier to work with,
decouples it from an embedded HTTP server, and gets the vast majority of
the RPC serialization and dispatch code out of a very commonly included
header.
There is unfortunately rather a lot of interconnected code here that
cannot be easily separated out into separate commits. The full details
of what happens here are as follows:
Major details:
- All of the RPC code is now in a `cryptonote::rpc` namespace; this
renames quite a bit to be less verbose: e.g. CORE_RPC_STATUS_OK
becomes `rpc::STATUS_OK`, and `cryptonote::COMMAND_RPC_SOME_LONG_NAME`
becomes `rpc::SOME_LONG_NAME` (or just SOME_LONG_NAME for code already
working in the `rpc` namespace).
- `core_rpc_server` is now completely decoupled from providing any
request protocol: it is now *just* the core RPC call handler.
- The HTTP RPC interface now lives in a new rpc/http_server.h; this code
handles listening for HTTP requests and dispatching them to
core_rpc_server, then sending the results back to the caller.
- There is similarly a rpc/lmq_server.h for LMQ RPC code; more details
on this (and other LMQ specifics) below.
- RPC implementing code now returns the response object and throws when
things go wrong which simplifies much of the rpc error handling. They
can throw anything; generic exceptions get logged and a generic
"internal error" message gets returned to the caller, but there is
also an `rpc_error` class to return an error code and message used by
some json-rpc commands.
- RPC implementing functions now overload `core_rpc_server::invoke`
following the pattern:
RPC_BLAH_BLAH::response core_rpc_server::invoke(RPC_BLAH_BLAH::request&& req, rpc_context context);
This overloading makes the code vastly simpler: all instantiations are
now done with a small amount of generic instantiation code in a single
.cpp rather than needing to go to hell and back with a nest of epee
macros in a core header.
- each RPC endpoint is now defined by the RPC types themselves,
including its accessible names and permissions, in
core_rpc_server_commands_defs.h:
- every RPC structure now has a static `names()` function that returns
the names by which the end point is accessible. (The first one is
the primary, the others are for deprecated aliases).
- RPC command wrappers define their permissions and type by inheriting
from special tag classes:
- rpc::RPC_COMMAND is a basic, admin-only, JSON command, available
via JSON RPC. *All* JSON commands are now available via JSON RPC,
instead of the previous mix of some being at /foo and others at
/json_rpc. (Ones that were previously at /foo are still there for
backwards compatibility; see `rpc::LEGACY` below).
- rpc::PUBLIC specifies that the command should be available via a
restricted RPC connection.
- rpc::BINARY specifies that the command is not JSON, but rather is
accessible as /name and takes and returns values in the magic epee
binary "portable storage" (lol) data format.
- rpc::LEGACY specifies that the command should be available via the
non-json-rpc interface at `/name` for backwards compatibility (in
addition to the JSON-RPC interface).
- some epee serialization got unwrapped and de-templatized so that it
can be moved into a .cpp file with just declarations in the .h. (This
makes a *huge* difference for core_rpc_server_commands_defs.h and for
every compilation unit that includes it, which previously had to
compile all the serialization code and then throw all but one copy away
at link time). This required some new macros so as not to break the
many places that still use the old way of putting everything in the
headers. The RPC code uses the new macros, as do a few other places;
there are comments in
contrib/epee/include/serialization/keyvalue_serialization.h explaining
how to use them.
- Detemplatized a bunch of epee/storages code. Most of it shouldn't
have been using templates at all (because it can only ever be called
with one type!), and now it isn't. This broke some things that didn't
properly compile because of missing headers or (in one case) a messed
up circular dependency.
- Significantly simplified a bunch of over-templatized serialization
code.
- All RPC serialization definitions are now out of
core_rpc_server_commands_defs.h and into a single .cpp file
(core_rpc_server_commands_defs.cpp).
- core RPC no longer uses the disgusting
BEGIN_URI_MAP2/MAP_URI_BLAH_BLAH macros. This was a terrible design
that forced slamming tons of code into a common header that didn't
need to be there.
- epee::struct_init is gone. It was a horrible hack that instantiated
multiple templates just so the coder could lazily write
`some_type var;` instead of properly value initializing with
`some_type var{};`.
- Removed a bunch of useless crap from epee. In particular, forcing
extra template instantiations all over the place in order to nest
return objects inside JSON RPC values is no longer needed, as are a
bunch of stuff related to the above de-macroization of the code.
- get_all_service_nodes, get_service_nodes, and get_n_service_nodes are
now combined into a single `get_service_nodes` (with deprecated
aliases for the others), which eliminates a fair amount of
duplication. The biggest obstacle here was getting the requested
fields reference passed through: this is now done by a new ability to
stash a context in the serialization object that can be retrieved by a
sub-serialized type.
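The tag-class/`names()`/`invoke`-overload pattern described above can be sketched with a toy command; everything here (the `GET_HEIGHT` command, the `endpoint` table, returning an `int` instead of a serialized response) is invented for illustration and is not the actual loki RPC code:

```cpp
#include <array>
#include <functional>
#include <string>
#include <type_traits>
#include <unordered_map>

// Permission/type tags, mirroring the rpc::RPC_COMMAND / rpc::PUBLIC idea:
struct RPC_COMMAND {};           // basic admin-only JSON command
struct PUBLIC : RPC_COMMAND {};  // also available on restricted connections

struct GET_HEIGHT : PUBLIC {
    // First name is primary, the rest are deprecated aliases:
    static constexpr std::array<const char*, 2> names() { return {"get_height", "getheight"}; }
    struct request {};
    struct response { int height; };
};

struct Server {
    // The invoke-overload pattern: one overload per command type.
    GET_HEIGHT::response invoke(GET_HEIGHT::request&&) { return {42}; }
};

struct endpoint {
    bool is_public;
    std::function<int(Server&)> handler;  // toy: real code serializes a response
};
std::unordered_map<std::string, endpoint> dispatch;

template <typename RPC>
void register_one() {
    for (const char* name : RPC::names())
        dispatch.emplace(name, endpoint{
            std::is_base_of_v<PUBLIC, RPC>,
            [](Server& s) { return s.invoke(typename RPC::request{}).height; }});
}

// One fold over the type list replaces a macro invocation per command:
template <typename... RPC>
void register_all() { (register_one<RPC>(), ...); }
```

The point is that adding a command only touches the command's own type definition plus one entry in a type list, instead of a nest of per-command macros in a common header.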
LMQ-specifics:
- The LokiMQ instance moves into `cryptonote::core` rather than being
inside cryptonote_protocol. Currently the instance is used both for
qnet and rpc calls (and so needs to be in a common place), but I also
intend future PRs to use the batching code for job processing
(replacing the current threaded job queue).
- rpc/lmq_server.h handles the actual LMQ-request-to-core-RPC glue.
Unlike http_server it isn't technically running the whole LMQ stack
from here, but the parallel name with http_server seemed appropriate.
- All RPC endpoints are supported by LMQ under the same names as defined
generically, but prefixed with `rpc.` for public commands and `admin.`
for restricted ones.
- service node keys are now always available, even when not running in
`--service-node` mode: this is because we want the x25519 key for
being able to offer CURVE encryption for lmq RPC end-points, and
because it doesn't hurt to have them available all the time. In the
RPC layer this is now called "get_service_keys" (with
"get_service_node_key" as an alias) since they aren't strictly only
for service nodes. This also means code needs to check
m_service_node, and not m_service_node_keys, to tell if it is running
as a service node. (This is also easier to notice because
m_service_node_keys got renamed to `m_service_keys`).
- Added block and mempool monitoring LMQ RPC endpoints: `sub.block` and
`sub.mempool` subscribes the connection for new block and new mempool
TX notifications. The latter can notify on just blink txes, or all
new mempool txes (but only new ones -- txes dumped from a block don't
trigger it). The client gets pushed a [`notify.block`, `height`,
`hash`] or [`notify.tx`, `txhash`, `blob`] message when something
arrives.
Minor details:
- rpc::version_t is now a {major,minor} pair. Forcing everyone to pack
and unpack a uint32_t was gross.
- Changed some macros to constexprs (e.g. CORE_RPC_ERROR_CODE_...).
(This immediately revealed a couple of bugs in the RPC code that were
assigning CORE_RPC_ERROR_CODE_... to a string; it had worked because
the macro allows implicit conversion to a char).
- De-templatizing useless templates in epee (i.e. a bunch of templated
types that were never invoked with different types) revealed a painful
circular dependency between epee and non-epee code for tor_address and
i2p_address. This crap is now handled in a suitably named
`net/epee_network_address_hack.cpp` hack because it really isn't
trivial to extricate this mess.
- Removed `epee/include/serialization/serialize_base.h`. Amazingly the
code somehow still all works perfectly with this previously vital
header removed.
- Removed bitrotted, unused epee "crypted_storage" and
"gzipped_inmemstorage" code.
- Replaced a bunch of epee::misc_utils::auto_scope_leave_caller with
LOKI_DEFERs. The epee version involves quite a bit more instantiation
and is ugly as sin. Also made the `loki::defer` class invokable for
some edge cases that need calling before destruction in particular
conditions.
- Moved the systemd code around; it makes much more sense to do the
systemd started notification in daemon.cpp, as late as possible,
rather than in core (when we can still have startup failures, e.g. if
the RPC layer can't start).
- Made the systemd short status string available in the get_info RPC
(and no longer require building with systemd).
- during startup, print (only) the x25519 when not in SN mode, and
continue to print all three when in SN mode.
- DRYed out some RPC implementation code (such as set_limit)
- Made wallet_rpc stop using a raw m_wallet pointer
Removes all "using namespace epee;" and "using namespace std;" from the
code and fixes up the various crappy places where unnamespaced types
were being used.
Also removes the ENDL macro (which was defined to be `std::endl`)
because it is retarded, and because even using std::endl instead of a
plain "\n" is usually a mistake (`<< std::endl` is equivalent to `<<
"\n" << std::flush`, and that explicit flush is rarely desirable).
This commit continues the complete replacement of the spaghetti code
mess that was inside daemon/ and daemonize/ which started in #1138, and
looked like an entry-level Java programmer threw up inside the code base.
This greatly simplifies it, removing a whole pile of useless abstraction
layers that don't actually abstract anything, and results in
considerably simpler code. (Many of the changes here were also carried
out in #1138; this commit updates them with the merged result which
amends some things from that PR and goes further in some places).
In detail:
- the `--detach` (and related `--pidfile`) options are gone. (--detach
is still handled, but now just prints a fatal error). Detaching a
process is an archaic unix mechanism that has no place on a modern
system. If you *really* want to do it anyway, `nohup lokid &` will do
the job. (The Windows service control code, which is probably seldom
used, is kept because it seems potentially useful for Windows users).
- Many of the `t_whatever` classes in daemon/* are just deleted (mostly
done in #1138); each one was a bunch of junk code that wraps 3-4 lines
but forces an extra layer (not even a generic abstraction, just a
useless wrapper) for no good reason and made the daemon code painfully
hard to understand and work with.
- The remaining `t_whatever` classes in daemon/* are renamed to
`whatever` (because prefixing every class with `t_` is moronic).
- Some stupid related code (e.g. epee's command handler returning an
unsuitable "usage" string that has to be string modified into what we
want) was replaced with more generic, useful code.
- Replaced boost mutexes/cvs with std ones in epee command handler, and
deleted some commented out code.
- The `--public-node` option handling was terrible: it was being handled
in main, but main doesn't know anything about options, so then it
forced it through the spaghetti objects *beside* the pack of all
options that got passed along. Moved it to a more sane location
(core_rpc_server) and parse it out with some sanity.
- Changed a bunch of std::bind's to lambdas because, at least for small
lambdas (i.e. with only one-or-two pointers for captures) they will
generally be more efficient as the values can be stored in
std::function's without any memory allocations.
This was a useless feature to begin with. According to a Monero
insider, this was introduced at the time with an intention of making it
on-by-default on every monerod instance everywhere, but because that was
such an overwhelmingly stupid idea, it never happened, yet all this code
(which is probably used by no one anywhere ever) remains in the code
base.
Even if the idea wasn't dumb to start with, this will also become even
more pointless with pulse, so just drop it (it is over 1000 lines of
code, not even counting the extra headers pulled in to do things like
querying CPU usage and battery status).
I started this PR to just fix the upgrades, but when investigating I
found some problematic memory leaks as well and ended up adding a bunch
of nice abstraction layers to make the code nicer.
I'd normally separate these into multiple commits, but they ended up
quite interdependent to the point where it isn't really feasible to do
so; but here's the details on the changes:
- add a `DEFAULT "register_height"` for the new "update_height" field so
that the alter table works
- reorganized the statement preparation and database migration so that
all statement preparation happens *after* migration; otherwise
statement preparation fails because some of the statements reference
columns that aren't created until the migration.
- created a `sql_compiled_statement` class that manages sqlite statement
pointers; there were several places that we were leaking them (the
main ones were okay, but we also have some dynamically constructed
ones for variable size queries).
- added a new abstraction layer around sqlite3 binding and value
extraction that makes the actual sql interaction code much less
verbose.
- removed the `nettype` argument from `sql_run_statement` as it wasn't
being used. (If something in the future needs it, the compiled
statement class has a reference to the db object, so it can be
obtained via `statement.nsdb.network_type()`)
- `bind(statement, 1, someval)` now does what
`sqlite3_bind_whatever(statement, 1, someval, ...)` did before, except
that it infers the "whatever" from the type (for everything except
blobs), and supplies whatever "..." arguments are needed.
- `bind(statement, 1, blob_view{someval.data(), size})` can do the same
for a blob. (There is also `bind_blob(statement, 1, someval.data(),
size)` which does the same thing but is nicer in some contexts).
- `auto x = get<T>(statement, 1)` is the bind counterpart for extracting
a known type (and similarly calls the right sqlite function based on
`T`). There is also a `get(statement, 1, x);` that does the same
thing, but infers the type from whatever `x` is. `get_blob` similarly
gets a blob value, though currently this is only used in the existing
`sql_copy_blob()` wrapper.
- All the `bind` and `get` forms accept an integer enum class as the
position argument, so the code can now write just:
bind(statement, lns_db_setting_column::top_height, value);
instead of:
bind(statement, static_cast<int>(lns_db_setting_column::top_height), value);
- bind_all lets you bind all the parameters in one go (and also clears
existing bindings), letting you write things like:
bind_all(statement, 123, "my string", my_u64_int);
which previously would have needed:
sqlite3_clear_bindings(statement);
sqlite3_bind_int(statement, 1, 123);
sqlite3_bind_text(statement, 2, "my string", 10, nullptr, nullptr);
sqlite3_bind_int64(statement, 3, my_u64_int);
- Also there's a `bind_and_run` which combines this binding with a
sql_run_statement.
- put all of the new binding code in an anonymous namespace, along with
several existing `static` functions (this is equivalent, just easier
than having to write static dozens of times).
- removed a superfluous extra "OR" clause in get_mappings_by_owners. It
was producing queries such as:
... WHERE o1.address IN (?, ?, ?) OR o2.address IN (?, ?, ?) OR o2.address IN (?, ?, ?)
also simplified that part of the code down quite a bit with a
pre-allocated string for the "?, ?, ?" part.
- Simplified some pointer/size pairs with lokimq::string_view
(which, internally, is already just a pointer/size pair).
- Various methods got de-const'ed in this change. They probably
shouldn't have been const before because even though they only access
data, they still mutate internal state of the stored statements, and
making them all mutable would be a bit gross. (They did this before,
too, but since everything was pointers calling into C code I guess
the compiler just allowed it).
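The type-dispatching `bind()` idea can be shown with a mock statement standing in for `sqlite3_stmt` (the real code calls the actual `sqlite3_bind_*` functions; the mock just records which one would have been chosen, and names here are paraphrased from the description above):

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <type_traits>
#include <vector>

struct mock_stmt { std::vector<std::string> calls; };
struct blob_view { const void* data; std::size_t size; };

// Infers the sqlite3_bind_{int,int64,text,blob} variant from the value type:
template <typename T>
void bind(mock_stmt& st, int pos, const T& val) {
    (void)val;  // the mock records the dispatch only, not the value
    if constexpr (std::is_same_v<T, blob_view>)
        st.calls.push_back("blob@" + std::to_string(pos));
    else if constexpr (std::is_integral_v<T> && sizeof(T) > 4)
        st.calls.push_back("int64@" + std::to_string(pos));
    else if constexpr (std::is_integral_v<T>)
        st.calls.push_back("int@" + std::to_string(pos));
    else
        st.calls.push_back("text@" + std::to_string(pos));  // strings, literals
}

// Accept a scoped enum as the position, avoiding static_cast at call sites:
template <typename T, typename E, typename = std::enable_if_t<std::is_enum_v<E>>>
void bind(mock_stmt& st, E pos, const T& val) {
    bind(st, static_cast<int>(pos), val);
}

// bind_all binds every parameter in one call, positions starting at 1:
template <typename... T>
void bind_all(mock_stmt& st, const T&... vals) {
    int pos = 0;
    (bind(st, ++pos, vals), ...);
}
```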
Only the initial purchase updated an LNS record's register_height. If the
record was updated in between and we reorganized past an update LNS
transaction, the details in the LNS DB wouldn't be reverted correctly,
which could cause the replayed TXs to fail.
It fails to revert correctly because when querying the LNS DB for
transactions to revert, we only have the initial registration height,
not the subsequent updates.
This PR adds that update_height field and migrates existing DB's.
Quoting from Jagerman's reason: With the std::hash specialization
already here, an unordered_map here seems an obviously better choice:
hash collisions won't be destructive (they are with std::map + hash as
keys), the code is slightly simpler (you can eliminate the hash call),
and performance is better (map is an insertion sorted container, thus
O(NlogN) on N insertions).
This isn't just an optimal data structure thing: using a map here has
a potential (if rare) data omission on collision.
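The destructive-collision point can be demonstrated with a deliberately bad hash (toy types, not the actual key type): a `std::map` ordered by hash treats two colliding keys as the same key and silently drops one, while an `unordered_map` with the same hash still compares keys for equality on collision:

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <unordered_map>

struct key { std::string s; };
struct bad_hash { std::size_t operator()(const key&) const { return 0; } };  // always collides
struct by_hash {  // hash-as-key ordering, like using the hash for a std::map
    bool operator()(const key& a, const key& b) const { return bad_hash{}(a) < bad_hash{}(b); }
};
struct key_eq {
    bool operator()(const key& a, const key& b) const { return a.s == b.s; }
};

std::size_t map_size_after_two_inserts() {
    std::map<key, int, by_hash> m;   // hash-ordered map
    m.emplace(key{"a"}, 1);
    m.emplace(key{"b"}, 2);          // same hash -> treated as a duplicate key
    return m.size();                 // second entry silently dropped
}

std::size_t umap_size_after_two_inserts() {
    std::unordered_map<key, int, bad_hash, key_eq> m;
    m.emplace(key{"a"}, 1);
    m.emplace(key{"b"}, 2);          // collision, but keys compare unequal
    return m.size();                 // both entries survive
}
```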
- Renames generic_key->generic_owner
- Move generic_owner and generic_signature out of crypto.h because they
aren't really crypto items, rather composition of crypto primitives.
generic_owner also needs access to account_public_address; although that
is just 2 public keys, I've decided to include cryptonote_basic.h into
tx_extra.h instead of crypto.h.
- Some generic_owner helper functions were moved into
cryptonote_basic/format_utils to avoid the circular dependency
between cryptonote_core and cryptonote_basic that would have arisen
had I put generic_owner/generic_signature into loki_name_system.h.
- Utilise the normal serialize macros since tx_extra.h already includes
the serializing headers.
This allows RPC coming from the loopback interface to not have
to pay for service. This makes it possible to run an externally
accessible RPC server for payment while also having a local RPC
server that can be run unrestricted and payment free.
We still pull out the spend public key; in an upcoming PR we will
take the entire wallet address to improve the usability of LNS.
Said PR will also accept subaddresses which should work out of the box
as long as we use the correct secret key of said subaddress to generate
the required signature.
I opted for just an enum instead of pulling in mapbox::variant for
simplicity. mapbox::variant saves a few lines of code for easily
capturing the type at assignment, whereas here we have helper functions
for assigning type.
Everywhere still uses crypto::hash for the name_hash since it's the
most compact storage form, except for the LNS DB which uses base64 so
that it can be indexed in SQL for speed.
But at the boundary between requesting and accessing the DB entry, the
hash gets converted into a base64 representation of the hash.
We opt for base64 as it is more compact than storing the hex
representation.
* Adds GET_SERVICE_NODE_STATUS RPC endpoint to lokid
Previously, getting the service node status was only available in the
lokid console. This commit adds the endpoint to the RPC server
* removed unnecessary variable, allow for json in request
This allows it to be used to fetch the current staking requirement by
just asking for a height of 0, while currently you'd first have to fetch
the height and then pass that in.
- constexpr functions in common/loki.h for inlining
- move hex functions out from common/loki.h to common/hex.h
- use and apply prev_txid on LNS TX's to all LNS types (for updating in the future)
- add lns burn type, for custom burn amounts
- accept and validate lokinet addresses via base32z
- return lokinet addresses in RPC LNS calls via base32z
- updated Messenger references to Session
- update documentation to note that only Session LNS entries are allowed currently
- remove raw c-string interface from LNS db
- update multi-SQL queries into single SQL queries
- remove tx estimation backlog in anticipation for 2 priorities only, blink + unimportant
b90c4bc3 rpc: error out from get_info if the proxied call errors out (moneromooo-monero)
fa16df99 make_test_signature: exit nicely on top level exception (moneromooo-monero)
054b2621 node_rpc_proxy: init some new rpc payment fields in invalidate (moneromooo-monero)
d0faae2a rpc: init a few missing client_info members (moneromooo-monero)
d56a483a rpc: do not propagate exceptions out of a dtor (moneromooo-monero)
3c849188 rpc: always set the update field in update on sucess (moneromooo-monero)
8231c7cd rpc: fix bootstrap RPC payment RPC being made in raw JSON, not JSON RPC (moneromooo-monero)
81c26589 rpc: don't auto fail RPC needing payment in bootstrap mode (moneromooo-monero)
cryptonote_protocol_handler calls `pool.get_blink(hash)` while already
holding a blink shared lock, which should have been
`pool.get_blink(hash, true)` to avoid `get_blink` trying to take its own
lock.
That double lock is undefined behaviour and can cause a deadlock on the
mutex, although it appears rare that it actually does. If it does,
however, this eventually backs up into vote relaying during the idle
loop, which then stalls the idle loop so we stop sending out uptime
proofs (since that is also in the idle loop).
A simple fix here is to add the `true` argument, but on reconsideration
this extra argument to take or not take a lock is messy and error prone,
so this commit instead removes the second argument entirely and instead
documents which call must and must not hold a lock, getting rid of the
three methods (get_blink, has_blink, and add_existing_blink) that had
the `have_lock` argument. This ends up having only a small impact on
calling code - the vast majority of callers already hold a lock, and the
few that don't are easily adjusted.
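The documented-lock-discipline approach can be sketched like this (a simplified stand-in, not the actual tx_pool code): instead of a `have_lock` flag that callers can get wrong, the lock-free accessor documents that the caller must already hold `blink_shared_lock()`, and the convenience wrapper takes the lock itself:

```cpp
#include <mutex>
#include <shared_mutex>
#include <string>
#include <unordered_map>

class pool {
    mutable std::shared_mutex blink_mutex;
    std::unordered_map<std::string, int> blinks;  // toy: hash -> height
public:
    std::shared_lock<std::shared_mutex> blink_shared_lock() const {
        return std::shared_lock{blink_mutex};
    }
    // Caller must already hold blink_shared_lock() (or an exclusive lock);
    // recursively locking the non-recursive mutex here would be UB.
    const int* get_blink_(const std::string& hash) const {
        auto it = blinks.find(hash);
        return it == blinks.end() ? nullptr : &it->second;
    }
    // Takes the lock itself; caller must NOT already hold it:
    bool has_blink(const std::string& hash) const {
        auto lock = blink_shared_lock();
        return get_blink_(hash) != nullptr;
    }
    void add_blink(std::string hash, int height) {
        std::unique_lock lock{blink_mutex};
        blinks.emplace(std::move(hash), height);
    }
};
```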
Handle errors better when long polling is disabled instead of endlessly
spamming logs.
Avoid lock contention when set_daemon is called. Instead of immediately
affecting the long polling thread (which could be engaging the mutex
until RPC timeout, meaning the program stalls for that duration), update
the address on the next iteration of the long polling thread.
Wallets handle daemons that disable long polling better by sleeping.
`tools::wallet2::rpc_long_poll_timeout` was a static member declaration
without a definition, which isn't allowed before C++17 (although can
work depending on compiler optimizations). Adding the definition in
wallet2.cpp isn't really an option (it would make core depend on the
wallet), so just move it to a constexpr static global (which is allowed
without a definition, even before C++17) in `rpc/` instead.
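The shape of the fix looks like this (names and the timeout value are illustrative, not the exact loki ones): a pre-C++17 static constexpr *member* declaration that gets odr-used needs an out-of-line definition or you risk a link error, while a constexpr namespace-scope global needs no separate definition:

```cpp
#include <chrono>

namespace rpc_config {
    // constexpr namespace-scope global: no separate definition required.
    constexpr std::chrono::seconds long_poll_timeout{90};
}

std::chrono::milliseconds poll_wait() {
    // Taking a reference odr-uses the object; doing this with an undefined
    // static constexpr member declaration is what caused the original problem.
    const auto& t = rpc_config::long_poll_timeout;
    return std::chrono::duration_cast<std::chrono::milliseconds>(t);
}
```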
This is really useful for the blink test suite as it lets us trigger a
resync (which normally only runs every 60s). In particular where we
have a (test) situation like this:
A - B - C
where we want to take down B and bring it up again but want to be sure
that new things learned from A get seen right away by C: if B does a
resync with C *before* it does a resync with A then C wouldn't get the
sync updates for a full minute, while if we force B to sync then force C
to sync we can ensure quick propagation for the test suite.
`--regtest` didn't work in some edge cases, this fixes various things:
- the genesis block wasn't accepted because it needed to be v7, not
vMax
- reduce initial uptime proof delay to 5s in regtest mode
- add --regtest flag to the wallet so that it can talk to a daemon in
--regtest mode.
This also adds two new mining options, available via rpc:
- slow_mining - this avoids the RandomX initialization. It is much
slower, but for regtest with fixed difficulty of 1 that is perfectly
fine.
- `num_blocks` - instruct the miner to mine for the given number of
blocks, then stop. (This can overmine if mining with multiple
threads at a low difficulty, but that's fine).
Blink txes were not being properly passed in/out of the RPC wallet.
This adds the necessary bits both to submit a blink and to get a blink
submission status back from the daemon.
This replaces the horrible, horrible, badly misused templated
once_a_time_seconds and once_a_time_milliseconds with a `periodic_task`
that works the same way but takes parameters as constructor arguments
instead of template parameters.
It also makes various small improvements:
- uses std::chrono::steady_clock instead of ifdef'ing platform dependent
timer code.
- takes a std::chrono duration rather than a template integer and
scaling parameter.
- timers can be reset to trigger on the next invocation, and this is
thread-safe.
- timer intervals can be changed at run-time.
This all then gets used to reset the proof timer immediately upon
receiving a ping (initially or after expiring) from storage server and
lokinet so that we send proofs out faster.
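A minimal sketch of a `periodic_task` along the lines described (steady_clock based, duration constructor argument, resettable, run-time-changeable interval); the real class also makes these operations thread-safe, which is omitted here for brevity:

```cpp
#include <chrono>

class periodic_task {
    using clock = std::chrono::steady_clock;
    clock::duration interval_;
    clock::time_point last_ = clock::now();
public:
    // Interval as a plain chrono duration, not template parameters:
    explicit periodic_task(clock::duration interval) : interval_{interval} {}

    // Returns true (and restarts the interval) when the task is due:
    bool should_run(clock::time_point now = clock::now()) {
        if (now - last_ < interval_) return false;
        last_ = now;
        return true;
    }
    // Make the task due on the next check (e.g. after receiving a ping):
    void reset() { last_ = clock::now() - interval_; }
    // Change the interval at run-time:
    void interval(clock::duration i) { interval_ = i; }
};
```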
If we've previously printed some alerts about not submitting proofs
because of lokinet/storage server it's nice to print a message as soon
as we get the needed ping. Also nice to see on startup for the first
one to know things are working.
This extracts uptime proof data entirely from service node states,
instead storing (some) proof data as its own first-class object in the
code and backed by the database. We now persistently store:
- timestamp
- version
- ip & ports
- ed25519 pubkey
and update it every time a proof is received. Upon restart, we load all
the proofs from the database, which means we no longer lose last proof
received times on restart, and always have the most recently received ip
and port information (rather than only having whatever it was in the
most recently stored state, which could be ~12 blocks ago). It also
means we don't need to spam the network with proofs for up to half an
hour after a restart because we now know when we last sent (and
received) our own proof before restarting.
This separation required some restructuring: most notably changing
`state_t` to be constructed with a `service_node_list` pointer, which it
needs both directly and for BlockchainDB access. Related to this is
also eliminating the separate BlockchainDB pointer stored in
service_node_list in favour of just accessing it via m_blockchain (which
now has a new `has_db()` method to detect when it has been initialized).
This adds the last lokinet ping time to the second `status` line. It
also compresses this (and storage server's) ping time format to (for
example) `4.7min` instead of `4.7 minutes ago`.
I also add a few tweaks here to the first line of the status message to
save some space for the most common cases:
- just don't show " on mainnet" at all since that's the common case, and
instead capitalize "ON TESTNET"/"ON STAGENET" to make it more obvious
that something is non-default.
- just don't show ", not mining" when not mining
- show the daemon version returned by RPC in addition to the hf version,
as "vX.Y.Z(net vAA)". You usually get a version display from an info
log message when running `lokid version`, but that can be quite
misleading as it's the local lokid binary version which is not
necessarily the same version as the running daemon.
Sample output:
SN:
Height: 174791/174791 (100.0%) ON TESTNET, net hash 455 H/s, v6.0.0(net v14), up to date, 8(out)+12(in) connections, uptime 0d 0h 4m 32s
SN: ff00062fa69397f5580ac2b3d060064d89edac625c77a58fdcd520f94ea54727 active, proof: (never), last pings: 1.6min (storage), 1.6min (lokinet)
Regular node (while mining):
Height: 174791/174791 (100.0%) ON TESTNET, mining at 94 H/s, net hash 455 H/s, v6.0.0(net v14), up to date, 8(out)+32(in) connections, uptime 0d 0h 44m 2s
- actually include the blink hashes in the core sync data
- fix cleanup to delete heights in (0, immutable] instead of [0,
immutable); we want to keep 0 because it signifies the mempool, and we
only need blocks after immutable, not immutable itself.
- fixed NOTIFY_REQUEST_GET_TXS to handle mempool txes properly (it was
only searching the chain and ignoring missed_txs, but missed_txs are ones
we need to look up in the mempool)
- Add a method to tx_pool (needed for the above) to grab multiple txes
by hash (essentially a multi-tx version of `get_transaction()`), and
change get_transaction() to use it with a single value.
- Added various debugging statements
- Added a bunch of comments to each condition of the preliminary blink
data check condition.
- Don't abort blink addition on a single signature failure: if there are
enough valid signatures we should still accept it.
- Check for blink signature approval when receiving blink signatures;
it's not enough to know that all were added successfully, we also have
to ask the blink tx if it is approved (which does additional checks on
subquorum counts) once we add them all.
This adds the ability for check_fee() to also check the burn amount.
This requires passing extra info through `add_tx()` (and the various
things that call it), so I took the:
bool keeped_by_block, bool relayed, bool do_not_relay
argument triplet, moved it into a struct in tx_pool.h, then added the other fee
options there (along with some static factory functions for generating the
typical sets of options).
The majority of this commit is chasing that change through the codebase and
test suite.
This is used by blink but should also help LNS and other future burn
transactions to verify a burn amount simply when adding the transaction to the
mempool. It supports a fixed burn amount, a burn amount as a multiple of the
minimum tx fee, and also allows you to increase the minimum tx fee (so that,
for example, we could require blink txes to pay miners 250% of the usual
minimum (unimportant) priority tx fee).
- Removed a useless core::add_new_tx() overload that wasn't used anywhere.
Blink-specific changes:
(I'd normally separate these into a separate commit, but they got interwoven
fairly heavily with the above change).
- changed the way blink burning is specified so that we have three knobs for
fee adjustment (fixed burn fee; base fee multiple; and required miner tx fee).
The fixed amount is currently 0, base fee is 400%, and required miner tx fee is
simply 100% (i.e. no different than a normal transaction). This is the same as
before this commit, but is changing how they are being specified in
cryptonote_config.h.
- blink tx fee, burn amount, and miner tx fee (if > 100%) now get checked
before signing a blink tx. (These fee checks don't apply to anyone else --
when propagating over the network only the miner tx fee is checked).
- Added a couple of checks for blink quorums: 1) make sure they have reached
the blink hf; 2) make sure the submitted tx version conforms to the current hf
min/max tx version.
- print blink fee information in simplewallet's `fee` output
- add "typical" fee calculations in the `fee` output:
[wallet T6SCwL (has locked stakes)]: fee
Current fee is 0.000000850 loki per byte + 0.020000000 loki per output
No backlog at priority 1
No backlog at priority 2
No backlog at priority 3
No backlog at priority 4
Current blink fee is 0.000004250 loki per byte + 0.100000000 loki per output
Estimated typical small transaction fees: 0.042125000 (unimportant), 0.210625000 (normal), 1.053125000 (elevated), 5.265625000 (priority), 0.210625000 (blink)
where "small" here is the same tx size (2500 bytes + 2 outputs) used to
estimate backlogs.
This catches any exception thrown in the inner quorumnet blink code and
sets it in the promise if it occurs, which propagates it out to
core_rpc_server to catch and deal with.
- Adds blink signature synchronization and storage through the regular
p2p network
- Adds wallet support (though this is still currently buggy and needs
additional fixes - it sees the tx when it arrives in the mempool but
isn't properly updating when the blink tx gets mined.)
There are a bunch of trivial forwarding wrappers in cryptonote_core that
simply call the same method in the pool, and blink would require adding
several more. Instead of all of these existing (and new) wrappers, just
directly expose the tx_pool reference so that anything with a `core`
reference can access and call to the mempool directly.
This code was too convoluted not to fix.
- There are 5 linear searches done over 2 vectors (without any changes
done to the vectors in between).
- One of the linear searches calls out to epee to convert the same value
to a hex string for every element.
- Both a set and a map are used here with identical key contents. The
old code looks like it only conditionally adds to the map if it finds
a match in the loop, but the for loop is doing exactly the same test
as the `find_if` that guards the entire `else if` containing it, so it
will *always* match.
- Removed some error code that can never actually run (because of the
above set/map equivalence).
This adds a thread-local, pre-seeded rng at `tools::rng` (to avoid the
multiple places we are creating + seeding such an RNG currently).
This also moves the portable uniform value and generic shuffle code
there as well as neither function is specific to service nodes and this
seems a logical place for them.
This is the bulk of the work for blink. There are two pieces yet to come
which will follow shortly, which are: the p2p communication of blink
transactions (which needs to be fully synchronized, not just shared,
unlike regular mempool txes); and an implementation of fee burning.
Blink approval, multi-quorum signing, cli wallet and node support for
submission denial are all implemented here.
This overhauls and fixes various parts of the SNNetwork interface to fix
some issues (particularly around non-SN communication with SNs, which
wasn't working).
There are also a few sundry FIXME's and TODO's of other minor details
that will follow shortly under cleanup/testing/etc.
Currently we store it as various different things: 3 separate ints, 2
u16s, 3 separate u16s, and a vector of u16s. This unifies all version
values to a `std::array<uint16_t,3>`.
- LOKI_VERSION_{MAJOR,MINOR,PATCH} are now just LOKI_VERSION
- The previous LOKI_VERSION (C-string of the version) is now renamed
LOKI_VERSION_STR
A related change included here is that the restricted RPC now returns
the major version in the get_info rpc call instead of an empty string
(e.g. "5" instead of ""). There is almost certainly enough difference
in the RPC results to distinguish major versions already so this doesn't
seem like it actually leaks anything significant.