oxen-core/.gitmodules

[submodule "external/rapidjson"]
path = external/rapidjson
url = https://github.com/Tencent/rapidjson
[submodule "external/trezor-common"]
path = external/trezor-common
url = https://github.com/trezor/trezor-common.git
[submodule "external/randomx"]
path = external/randomx
url = https://github.com/oxen-io/loki-randomXL
[submodule "external/googletest"]
path = external/googletest
url = https://github.com/google/googletest.git
Replace epee http rpc server with uWebSockets

This replaces the NIH epee http server, which does not work all that well, with an external C++ library called uWebSockets. Fundamentally this gives the following advantages:

- Much less code to maintain.
- Just one thread for handling HTTP connections, versus epee's pool of threads.
- Uses the existing LokiMQ job server and existing thread pool for handling the actual tasks; they are processed/scheduled in the same "rpc" or "admin" queues as lokimq rpc calls. One notable benefit is that "admin" rpc commands get their own queue (and thus cannot be delayed by long rpc commands). Currently the lokimq threads, the http rpc thread pool, the p2p thread pool, the job queue thread pool, the dns lookup thread pool, and so on are *all* different thread pools; this is a step towards consolidating them.
- Very little mutex contention (which has been a major problem with epee RPC in the past): there is one mutex (inside uWebSockets) for putting responses back into the thread managing the connection; everything internally gets handled through (lock-free) lokimq inproc sockets.
- Faster RPC performance on average, and much better worst-case performance. Epee's http interface seems to have some race condition that occasionally stalls a request (even a very simple one) for a dozen or more seconds for no good reason.
- Long polling gets redone here to no longer need threads: instead we just store the request and respond either when the result becomes available from the thread pool, or else from a timer (that runs once per second) that times out long polls.

---

The basic idea of how this works from a high level:

We launch a single thread to handle HTTP RPC requests and response data. This uWebSockets thread is essentially running an event loop: it never actually handles any logic; it only serves to shuttle data that arrives in a request to some other thread, and then, at some later point, to send some reply back to that waiting connection. Everything is asynchronous and non-blocking here: the basic uWebSockets event loop just operates as things arrive, passes them off immediately, and goes back to waiting for the next thing to arrive.

The basic flow is like this:

0. uWS thread -- listens on localhost:22023
1. uWS thread -- incoming request on localhost:22023
2. uWS thread -- fires callback, which injects the task into the LokiMQ job queue
3. LMQ main loop -- schedules it as an RPC job
4. LMQ rpc thread -- some LokiMQ thread runs it, gets the result
5. LMQ rpc thread -- result gets queued up for the uWS thread
6. uWS thread -- takes the reply and starts sending it (asynchronously) back to the requestor

In more detail:

uWebSockets has registered handlers for non-jsonrpc requests (legacy JSON or binary). If the port is restricted then admin commands get mapped to an "Access denied" response handler; otherwise public commands (and admin commands on an unrestricted port) go to the rpc command handler. POST requests to /json_rpc have their own handler; this is a little different than the above because it has to parse the request before it can determine whether it is allowed or not, but once this is done it continues roughly the same as legacy/binary requests.

uWebSockets then listens on the given IP/port for new incoming requests, and starts listening for requests in a thread (we own this thread). When a request arrives, it fires the event handler for that request. (This may happen multiple times, if the client is sending a bunch of data in a POST request.)
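
To make the hand-off concrete, the sketch below shows the request side in plain uWebSockets. This is illustrative only, not oxen-core's actual RPC server code: queue_rpc_job and run_rpc are made-up stand-ins (a detached thread instead of the LokiMQ job queue and thread pool), and the include path depends on how uWebSockets is vendored.

    // Sketch: single-threaded uWS event loop that buffers a /json_rpc POST body,
    // hands the work to another thread, and later defers the reply back onto the
    // uWS loop (steps 0-6 above). Build against uWebSockets/uSockets.
    #include <uwebsockets/App.h>  // path may differ for the in-tree submodule
    #include <functional>
    #include <memory>
    #include <string>
    #include <string_view>
    #include <thread>

    // Stand-in for step 2: the real code injects the task into the LokiMQ "rpc"
    // (or "admin") job queue; here we just detach a thread.
    static void queue_rpc_job(std::function<void()> job) {
        std::thread{std::move(job)}.detach();
    }

    // Stand-in for steps 3-4: run the RPC command and produce the reply body.
    static std::string run_rpc(const std::string& request_body) {
        return R"({"jsonrpc":"2.0","id":"0","result":{"request_bytes":)"
             + std::to_string(request_body.size()) + "}}";
    }

    int main() {
        uWS::App{}
            .post("/json_rpc", [](uWS::HttpResponse<false>* res, uWS::HttpRequest*) {
                auto body    = std::make_shared<std::string>();
                auto aborted = std::make_shared<bool>(false);
                res->onAborted([aborted] { *aborted = true; });
                // The POST body may arrive in several chunks (step 1).
                res->onData([res, body, aborted](std::string_view chunk, bool last) {
                    body->append(chunk);
                    if (!last)
                        return;
                    uWS::Loop* loop = uWS::Loop::get();  // this thread's event loop
                    // Step 2: hand off and return immediately; the event loop keeps
                    // servicing other connections while the worker runs.
                    queue_rpc_job([res, body, aborted, loop] {
                        std::string reply = run_rpc(*body);  // steps 3-4 (worker thread)
                        // Steps 5-6: the response object may only be touched from the
                        // uWS thread, so defer the send back onto its loop.
                        loop->defer([res, aborted, reply = std::move(reply)] {
                            if (*aborted)
                                return;
                            res->writeHeader("Content-Type", "application/json");
                            res->end(reply);
                        });
                    });
                });
            })
            .listen("127.0.0.1", 22023, [](auto*) { /* step 0: now listening */ })
            .run();  // this thread becomes the HTTP event loop
    }
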
Once we have the full request, we then queue the job in LokiMQ, putting it in the "rpc" or "admin" command categories. (The one practical difference here is that "admin" is configured to be allowed to start up its own thread if all other threads are busy, while "rpc" commands are prioritized along with everything else.) LokiMQ then schedules this, along with native LokiMQ "rpc." or "admin." requests.

When an LMQ worker thread becomes available, the RPC command gets called in it and runs. Whatever output it produces (or error message, if it throws) then gets wrapped up in jsonrpc boilerplate (if necessary; sketched below), and delivered to the uWebSockets thread to be sent in reply to that request.

uWebSockets picks up the data and sends whatever it can without blocking, then buffers whatever it couldn't send to be sent again in a later event loop iteration once the requestor can accept more data. (This part is outside lokid; we only have to give uWS the data and let it worry about delivery.)

---

PR specifics:

Things removed from this PR:

1. ssl settings; with this PR the HTTP RPC interface is plain-text. The previous default generated a self-signed certificate for the server on startup and then the client accepted any certificate. This is actually *worse* than unencrypted because it is entirely MITM-readable and yet might make people think that their RPC communication is encrypted, and setting up actual certificates is difficult enough that I think most people don't bother. uWebSockets *does* support HTTPS, and we could glue the existing options into it, but I'm not convinced it's worthwhile: it works much better to put HTTPS in a front-end proxy holding the certificate that proxies requests to the backend (which can then listen in restricted mode on some localhost port). One reason this is better is that it is much easier to reload and/or restart such a front-end server, while certificate updates with lokid require a full restart. Another reason is that you get an error page instead of a timeout if something is wrong with the backend. Finally, we also save having to generate a temporary certificate on *every* lokid invocation.

2. HTTP Digest authentication. Digest authentication is obsolete (and was already obsolete when it got added to Monero). HTTP Digest was originally an attempt to provide a password authentication mechanism that does not leak the password in transit, but still required that the server know the password. It only has marginal value against replay attacks, and is made entirely obsolete by sending traffic over HTTPS instead. No client out there supports Digest but *not* Basic auth, and so given the limited usefulness it seems pointless to support more than Basic auth for HTTP RPC login. What's worse is that epee's HTTP Digest authentication is a terrible implementation: it uses boost::spirit -- a recursive descent parser meant for building complex language grammars -- just to parse a single HTTP header for Digest auth. This is a big load of crap that should never have been accepted upstream, and that we should get rid of (even if we wanted to support Digest auth it takes less than 100 lines of code to do it when *not* using a recursive descent parser).
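
As a rough illustration of the "wrapped up in jsonrpc boilerplate" step described above, a worker could build the reply envelope along these lines. This is only a sketch using the bundled nlohmann-json, not oxen-core's actual helper; wrap_jsonrpc_reply is a made-up name.

    #include <nlohmann/json.hpp>
    #include <exception>
    #include <functional>
    #include <string>

    // Sketch: run an RPC callable and wrap its result (or the exception it threw)
    // in a JSON-RPC 2.0 envelope before handing the string back to the uWS thread.
    std::string wrap_jsonrpc_reply(const nlohmann::json& id,
                                   const std::function<nlohmann::json()>& call) {
        nlohmann::json reply{{"jsonrpc", "2.0"}, {"id", id}};
        try {
            reply["result"] = call();
        } catch (const std::exception& e) {
            // -32603 is the JSON-RPC 2.0 "internal error" code.
            reply["error"] = {{"code", -32603}, {"message", e.what()}};
        }
        return reply.dump();
    }
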
[submodule "external/uWebSockets"]
path = external/uWebSockets
url = https://github.com/uNetworking/uWebSockets.git
[submodule "external/libuv"]
path = external/libuv
url = https://github.com/libuv/libuv.git
Replace epee http client with curl-based client

In short: epee's http client is garbage, standard violating, and unreliable.

This completely removes the epee http client support and replaces it with cpr, a curl-based C++ wrapper. rpc/http_client.h wraps cpr for RPC requests specifically, but it is also usable directly.

This replacement has a number of advantages:

- Requests are considerably more reliable. The epee http client code assumes that a connection will be kept alive forever, and returns a failure if a connection is ever closed. This results in some very annoying things: for example, preparing a transaction and then waiting a long time before confirming it will usually result in an error communicating with the daemon. This is just terrible behaviour: the right thing to do on a connection failure is to resubmit the request.
- epee's http client is broken in lots of other ways: for example, it tries throwing SSL at the port to see if it is HTTPS, but this is protocol violating and just breaks (with a several second timeout) on anything that *isn't* epee http server (for example, when lokid is behind a proxying server).
- Even when it isn't doing the above, the client breaks in other ways: for example, there is a comment (replaced in this PR) in the Trezor PR code that forces a connection close after every request because epee's http client doesn't do proper keep-alive request handling.
- It seems noticeably faster to me in practical use in this PR; both simple requests (for example, when running `lokid status`) and wallet<->daemon connections are faster, probably because of crappy code in epee. (I think this is also related to the throw-ssl-at-it junk above: the epee client always generates an ssl certificate during static initialization because it might need one at some point.)
- Significantly reduces the amount of code we have to maintain.
- Removes all the epee ssl option code: curl can handle all of that just fine.
- Removes the epee socks proxy code; curl can handle that just fine. (And can do more: it also supports using HTTP/HTTPS proxies.)
- When a cli wallet connection fails we now show why it failed (which is now an error message from curl), which could have all sorts of reasons like hostname resolution failure, bad ssl certificate, etc. Previously you just got a useless generic error that tells you nothing.

Other related changes in this PR:

- Drops the check-for-update and download-update code. To the best of my knowledge these have never been supported in loki-core and so it didn't seem worth the trouble to convert them to use cpr for the requests.
- Cleaned up node_rpc_proxy return values: there was an inconsistent mix of ways to return errors and how the returned strings were handled. Instead this cleans it up to return a pair<bool, val>, which (with C++17) can be transparently captured as:

      auto [success, val] = node.whatever(req);

  This drops the failure message string, but it was almost always set to something fairly useless (if we want to resurrect it we could easily change the first element to be a custom type with a bool operator for success, and a `.error` attribute containing some error string, but for the most part the current code wasn't doing much useful with the failure string). A sketch of a request helper in this style follows this list.
- Changed local detection (for automatic trusted daemon determination) to just look for localhost, and to not try to resolve anything. Trusting non-public IPs does not work well (e.g. with lokinet where all .loki addresses resolve to a local IP).
- The ssl fingerprint option is removed; this isn't supported by curl (because it is essentially just duplicating what a custom cainfo bundle does).
- --daemon-ssl-allow-chained is removed; it wasn't a useful option (if you don't want chaining, don't specify a cainfo chain).
- --daemon-address is now a URL instead of just host:port. (If you omit the protocol, http:// is prepended.)
- --daemon-host and --daemon-port are now deprecated and produce a warning (in simplewallet) if used; the replacement is to use --daemon-address.
- --daemon-ssl is deprecated; specify --daemon-address=https://whatever instead.
- The above three options are now hidden from --help.
- Reordered the wallet connection options to make more logical sense.
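
As referenced in the node_rpc_proxy bullet above, a cpr-based request helper in the pair<bool, val> style could look roughly like the sketch below. It is not the actual rpc/http_client.h interface; the function name, parameters, and endpoint handling are made up for illustration.

    #include <cpr/cpr.h>
    #include <nlohmann/json.hpp>
    #include <string>
    #include <utility>

    // Sketch: POST a JSON-RPC request with cpr (libcurl underneath) and return
    // a pair<bool, json> so callers can write: auto [ok, val] = json_rpc(...);
    std::pair<bool, nlohmann::json> json_rpc(
            const std::string& daemon_url,  // e.g. "http://127.0.0.1:22023"
            const std::string& method,
            const nlohmann::json& params = nlohmann::json::object()) {
        nlohmann::json req{{"jsonrpc", "2.0"}, {"id", "0"}, {"method", method}, {"params", params}};
        cpr::Response r = cpr::Post(
                cpr::Url{daemon_url + "/json_rpc"},
                cpr::Header{{"Content-Type", "application/json"}},
                cpr::Body{req.dump()});
        if (r.error.code != cpr::ErrorCode::OK || r.status_code != 200)
            return {false, nullptr};  // curl's reason (dns failure, bad cert, ...) is in r.error.message
        auto body = nlohmann::json::parse(r.text, nullptr, /*allow_exceptions=*/false);
        if (body.is_discarded() || !body.contains("result"))
            return {false, nullptr};
        return {true, body["result"]};
    }

    // Usage, in the style shown above:
    //     auto [success, info] = json_rpc("http://127.0.0.1:22023", "get_info");
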
[submodule "external/cpr"]
path = external/cpr
url = https://github.com/whoshuu/cpr.git
[submodule "external/ghc-filesystem"]
path = external/ghc-filesystem
url = https://github.com/gulrak/filesystem.git
[submodule "external/Catch2"]
path = external/Catch2
url = https://github.com/catchorg/Catch2
[submodule "external/SQLiteCpp"]
path = external/SQLiteCpp
url = https://github.com/SRombauts/SQLiteCpp
[submodule "external/nlohmann-json"]
path = external/nlohmann-json
url = https://github.com/nlohmann/json.git
[submodule "external/date"]
path = external/date
url = https://github.com/HowardHinnant/date.git
[submodule "external/oxen-encoding"]
path = external/oxen-encoding
url = https://github.com/oxen-io/oxen-encoding.git
[submodule "external/oxen-logging"]
path = external/oxen-logging
url = https://github.com/oxen-io/oxen-logging
[submodule "external/oxen-mq"]
path = external/oxen-mq
url = https://github.com/oxen-io/oxen-mq.git
[submodule "external/pybind11"]
path = external/pybind11
url = https://github.com/pybind/pybind11
branch = stable