This alters the derived key hash to reverse the pubkey order when
replying so that the same key in each direction is used, i.e. for a
message from client C to snode S the key is:
H(cS || C || S)
Before this commit, the snode would use an encryption key for return messages
of:
H(sC || S || C)
and, while the client can still decrypt that, it means the client has
*two* derived keys to worry about. With this change, the server swaps the
order so that it puts itself *second*:
H(sC || C || S)
which will yield the same shared key as the client derived for the
original message.
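As a sketch of why the two derivations now agree — with SHA-256 standing in for the actual derivation hash, and placeholder byte strings standing in for the real X25519 values:

```python
import hashlib

def derive_key(shared: bytes, client_pub: bytes, server_pub: bytes) -> bytes:
    # Both sides compute H(shared || C || S), client pubkey always first.
    return hashlib.sha256(shared + client_pub + server_pub).digest()

# The DH shared secret is the same byte string on both sides (cS == sC);
# these placeholder values stand in for real X25519 outputs.
shared = b"\x01" * 32
C, S = b"\xaa" * 32, b"\xbb" * 32

client_key = derive_key(shared, C, S)  # client: H(cS || C || S)
server_key = derive_key(shared, C, S)  # server now also orders C before S
assert client_key == server_key
```

Before the change the server effectively computed `derive_key(shared, S, C)`, which hashes the same inputs in a different order and so yields a different key.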
Replaces the sn.onion_req_v2 OMQ endpoint with a new sn.onion_request
that takes an extendable bencoded dict (the same as used extensively in
oxen-core and lokinet), thus allowing us to pass fields such as hop
number and encryption type in the request, remaining compact (binary
data has no overhead), and allows for future additions without requiring
a new endpoint.
The new endpoint activates for SN-to-SN onion data at HF18; before then
the sn.onion_req_v2 is still used and remains backwards compatible (but
cannot be extended with encryption type or hop info).
Currently on the wire we have four fields:
p - the encrypted payload (required)
ek - the ephemeral key (required)
et - the encryption type (optional, aes-gcm if not provided)
nh - the hop number, which gets incremented on each hop
Max path length is limited to 15, to allow the client to choose to
obscure its path knowledge somewhat by using a randomized starting hop
position from `[0, 15-actual]`.
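For illustration, a minimal bencode sketch of such a request dict (the payload, ephemeral key, and hop number here are placeholder values, not real wire data):

```python
def bencode(v) -> bytes:
    # Minimal bencode encoder: ints, byte/str strings, and dicts.
    if isinstance(v, int):
        return b"i%de" % v
    if isinstance(v, (bytes, str)):
        b = v.encode() if isinstance(v, str) else v
        return b"%d:" % len(b) + b
    if isinstance(v, dict):
        # Bencoded dict keys are sorted by their raw bytes.
        items = sorted((k.encode() if isinstance(k, str) else k, val)
                       for k, val in v.items())
        return b"d" + b"".join(bencode(k) + bencode(x) for k, x in items) + b"e"
    raise TypeError(f"cannot bencode {type(v)}")

req = {
    "p": b"\x8f\x02...",   # encrypted payload (placeholder bytes)
    "ek": b"\xaa" * 32,    # ephemeral key (placeholder)
    "et": "aes-gcm",       # encryption type
    "nh": 1,               # hop number
}
wire = bencode(req)
```

Binary values go on the wire directly as length-prefixed strings, which is where the "no overhead" compactness comes from.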
This gives a compile-time failure if we don't handle a case, which is
nicer than the current run-time error log entry.
The diff looks messy but this is basically just extracting each possible
ParsedInfo option into its own method and calling it via std::visit.
(var::visit from oxenmq/variant.h *is* std::visit everywhere except
pre-10.15 macOS where std::visit doesn't work).
Client requests only ever call with v2=true, and the oxenmq v1 request
endpoint isn't actually used (except for the ping hack going away after
HF18), so just remove a bunch of dead code.
Prevents possible multiple invocation errors by making sure every place we
invoke a callback function that isn't obviously at the end of a function
explicitly returns.
If one of these errors were hit, processing would continue, writing
status/headers/body multiple times until we throw an exception, which
bubbles back to the exception handler, which then writes the body yet
another time.
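The pattern being enforced, as a small Python sketch (function and field names hypothetical):

```python
def handle(ok: bool, reply) -> None:
    """Invoke the reply callback exactly once."""
    if not ok:
        reply({"status": 400, "body": "bad request"})
        return  # without this, control falls through and reply() fires again
    reply({"status": 200, "body": "ok"})

calls = []
handle(False, calls.append)
assert calls == [{"status": 400, "body": "bad request"}]  # invoked exactly once
```

The fix is purely mechanical: any callback invocation that is not the last statement of its function gets an explicit `return` after it.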
- Pass payload/control as arguments (hard-coding it and needing to
recompile was gross).
- Fix 0-hop onion requests; the last layer of data encapsulation wasn't
being applied to a 0-hop request.
- Print metadata (headers, body size, etc.) to stderr, so that stdout
can be redirected or piped to `jq` to just process the body.
- Auto-detect response encoding/encryption. We can get back plaintext,
encrypted binary, or encrypted+base64 data, depending on the request
type and parameters; the code now probes it and attempts decryption to
figure out which one it is.
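The probing order could be sketched like this (Python, with a stub standing in for the real authenticated decryption, which fails on any input it didn't produce):

```python
import base64

def classify(body: bytes, try_decrypt) -> str:
    # try_decrypt returns plaintext bytes, or None if authentication fails.
    if try_decrypt(body) is not None:
        return "encrypted"
    try:
        raw = base64.b64decode(body, validate=True)
        if try_decrypt(raw) is not None:
            return "encrypted+base64"
    except ValueError:
        pass  # not valid base64 at all
    return "plaintext"

# Stub cipher: "decryption" succeeds only on a magic prefix.
stub = lambda b: b[3:] if b.startswith(b"ENC") else None

assert classify(b"ENC\x8f\x02", stub) == "encrypted"
assert classify(base64.b64encode(b"ENC\x8f\x02"), stub) == "encrypted+base64"
assert classify(b'{"status":200}', stub) == "plaintext"
```

An AEAD decryption failing its tag check on non-ciphertext input is what makes this probing reliable in practice.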
Redoes the oxend rpc API interface to make it slightly simpler and avoid
double-encoding (or triple-encoding!) json data.
The API now looks like:
{"method": "oxend_request", "params": {"endpoint": "get_service_nodes", "params": {"limit": 1}}}
This renames the "oxend_params" key to just "params" (because the
"oxend" bit seems redundant given the method name) and makes it optional
(because many oxend rpc endpoints, including get_service_nodes, do not
require parameters to be passed at all).
The return value is now (when also using `"json": true` in the control
parameter added in the previous commit) straight JSON:
{"status":200,"body":{"result":{"block_hash":"699e2f20bcb...
instead of:
{"status":200,"body":"{\"result\":\"{\\\"block_hash\\\":\\\"699e2f20bcb...
If not requesting json embedding in the control parameters (see previous
commit) then there will still be an outer layer of json string encoding
for body, but the JSON is still embedded directly rather than being an
extra string layer:
{"status":200,"body":"{\"result\":{\"block_hash\":\"699e2f20bcb...
On an error of any kind you get back a 400 status with the error
message in "body", for example:
{"body":"Unable to parse request: Failed to parse JSON parameters","status":400}
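The difference in encoding layers can be reproduced with `json.dumps` (hash value truncated just as in the examples above):

```python
import json

inner = {"result": {"block_hash": "699e2f20bcb..."}}

# with `"json": true`: the JSON is embedded directly
embedded = json.dumps({"status": 200, "body": inner})

# without it: one layer of string encoding around the body
stringified = json.dumps({"status": 200, "body": json.dumps(inner)})

# the old API string-encoded the oxend result yet again (triple encoding)
old = json.dumps({"status": 200, "body": json.dumps(
    {"result": json.dumps(inner["result"])})})
```

Each extra layer means an extra `json.loads` (and a pile of escaped quotes) for every client consuming the response.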
This adds two boolean fields to the "control" section of an onion
request (i.e. the part where the "headers" field is passed).
`"base64": false` -- If this is specified (and false) then the onion
request response will *not* be base64 encoded after encryption (i.e.
just sent back as straight encrypted binary data).
`"json": true` -- If specified and true *and* the request returns JSON
then embed the JSON directly into the response rather than putting the
stringified JSON into the body string. E.g. if this is specified then
you would get:
{"status":200,"body":{"hi":"123"}}
instead of:
{"status":200,"body":"{\"hi\":\"123\"}"}
No actual parsing of the inner content is done; if the endpoint returned
invalid json then the return will contain invalid json.
The defaults for both fields (true and false, respectively) give back
the existing behaviour so that current onion request clients won't break.
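A sketch of how the two flags compose on the response side (Python; the real encryption step is elided — the "encrypted" bytes here are just the plain response):

```python
import base64, json

def encode_response(inner_json: dict, control: dict) -> bytes:
    # Defaults preserve existing behaviour: base64=True, json=False.
    embed = control.get("json", False)
    b64 = control.get("base64", True)

    body = inner_json if embed else json.dumps(inner_json)
    response = json.dumps({"status": 200, "body": body}).encode()
    encrypted = response  # stand-in: the real code encrypts here
    return base64.b64encode(encrypted) if b64 else encrypted

raw = encode_response({"hi": "123"}, {"base64": False, "json": True})
# raw is the direct JSON bytes, no base64 layer and no stringified body
```

Note the flags are independent: `"json"` controls the inner body embedding, while `"base64"` controls the outer encoding applied after encryption.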
The ghetto version was not even remotely similar to a string view, but
was actually a string iterator pair container with no ability to
actually "view" anything.
- Enforce hex rather than accepting any random 66- or 64-character
string as a pubkey
- Clean up pubkey -> integer code
- The cleanup fixes a bug where pubkey -> integer conversion was
skipping the first two bytes on testnet (and ended up in UB by reading
the null + one byte beyond the end of the string for testnet
addresses). THIS WILL BREAK EXISTING TESTNET PUBKEY->SWARM VALUES!
(but it's only testnet, so that's okay).
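A sketch of the tightened validation and conversion (the actual swarm-assignment reduction in SS may differ; taking the first 8 bytes as a big-endian uint64 is illustrative only):

```python
import re

def pubkey_to_int(pk_hex: str) -> int:
    # Enforce actual hex, not just any 64- or 66-character string.
    if not re.fullmatch(r"[0-9A-Fa-f]{64}|[0-9A-Fa-f]{66}", pk_hex):
        raise ValueError("invalid pubkey hex")
    raw = bytes.fromhex(pk_hex)
    if len(raw) == 33:
        raw = raw[1:]  # drop the network-prefix byte of a 66-char pubkey
    # Hypothetical reduction: first 8 bytes as a big-endian uint64, always
    # taken from the full 32-byte key (never skipping bytes or reading
    # past the end, which was the testnet bug).
    return int.from_bytes(raw[:8], "big")
```

The key point is that the prefix handling happens once, on the decoded bytes, rather than as ad-hoc offsets into the hex string.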
The protocol is currently underdesigned with no failure mechanism at
all, so there's nothing we *can* fix here. We should fix the protocol,
of course, but that's outside the scope of this PR.
SS's current testee/tester sorting is based on (nasty) sorting of a
lower-case hex string representation of the pubkey.
This adds a hack for compatibility up to HF18, then at HF18 switches to
sorting by direct binary pubkey values.
If we're skipping it, we don't care about it being missing.
Without this we get a bunch of warnings whenever an unfilled awaiting
contributions node has not yet submitted proofs or is abandoned (which
is usually the case on testnet, and sometimes on mainnet).
Sending a bogus sn.onion_req with a "ping" argument was a gross hack
that was needed for a backwards-compatible mid-HF update a long time ago;
this finally replaces it with a proper endpoint (starting at HF18).
- Do the json parsing as part of the payload parsing rather than
allocating a string and then making the caller do it (there's no case
where the caller *doesn't* want to do it).
- Modernize code to use structured bindings, allowing both cleaner code
and reduction in the number of moves/copies.