- Finally removed "difficulty". It has not been used for a long time
(and even back then never changed from the value 1).
- Removed constants/handling of already-past hard forks
- Update oxen-encoding, oxen-mq, oxen-logging to latest
- Update curl to latest, and disable nghttp2 etc. to make static build
work again on macos
- Latest stable openssl
`sig_timestamp` wasn't being properly used for the signature
verification in `store`: `timestamp` was always used instead, even when
both were present, making `sig_timestamp` effectively useless.
This fixes it.
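As a sketch of the corrected selection (the field names come from the request parameters above; the helper itself is hypothetical):

```python
def signature_timestamp(params: dict) -> int:
    # Prefer sig_timestamp for signature verification when the client
    # supplies it; fall back to timestamp otherwise. The old behaviour
    # always used timestamp, even when sig_timestamp was present.
    return params.get("sig_timestamp", params.get("timestamp"))
```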
- Various msg_id calculations needed to be updated for the HF19.3
message hash change
- test_subkey_auth.py was using some old method names that no longer
work
Originally the message hash incorporated the message's timestamp and
expiry.
This breaks things in a couple of ways:
- the hash isn't reproducible because the message expiry is no longer
fixed.
- libsession-util needs de-duplication of identical messages even if the
timestamps differ.
This removes the use of timestamp/expiry starting at HF19.3.
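The effect can be illustrated with a hedged sketch: the concatenation scheme below is made up for illustration (it is not the wire format), but the point is that only the recipient, namespace, and payload feed the digest, so the hash is reproducible and identical messages de-duplicate even when their timestamps or expiries differ:

```python
import base64
import hashlib

def message_hash(pubkey: bytes, namespace: int, data: bytes) -> str:
    # Timestamp/expiry deliberately excluded (post-HF19.3 behaviour);
    # the field encoding here is illustrative only.
    h = hashlib.blake2b(digest_size=32)
    h.update(pubkey)
    h.update(str(namespace).encode())
    h.update(data)
    return base64.b64encode(h.digest()).decode()
```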
- Adds `get_expiries` endpoint to retrieve expiries
- Adds `extend=True` parameter to the expire_msgs endpoint. This is
just like `shorten=True` but in the opposite direction (i.e. it allows
you to only extend).
- Adds an `unchanged` key to the result of an expire_msgs request: this
is a dict of hash -> expiry for all hashes that were *not* updated when
using one of the new shorten/extend options.
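The shorten/extend semantics described above can be sketched as follows (a hypothetical helper mirroring the described behaviour, not the actual server implementation):

```python
def apply_expiry(current: dict, new_expiry: int, *, shorten=False, extend=False):
    """current maps message hash -> expiry; returns (updated, unchanged).

    With shorten=True only expiries later than new_expiry are pulled
    down; with extend=True only expiries earlier than new_expiry are
    pushed up.  Everything skipped lands in the `unchanged` dict, as in
    the new expire_msgs response.
    """
    updated, unchanged = {}, {}
    for h, exp in current.items():
        if (shorten and exp <= new_expiry) or (extend and exp >= new_expiry):
            unchanged[h] = exp
        else:
            updated[h] = new_expiry
    return updated, unchanged
```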
With upcoming config changes we have various uses for batch requests to
fetch/store/etc. from multiple namespaces, and the limit of 5 is likely
to become too restrictive quite soon.
Maintaining multiple subscriptions for the same account on the same
connection for each different set of namespaces/want_data is painful, so
merge them into one.
This has some implications for allowing a single connection to renew
namespace and want_data subscriptions (even if those namespaces and/or
the want_data flag aren't specified in renewals), but that is
relatively minor and it allows considerable simplification of
subscriptions.
The PN server potentially needs to subscribe to many addresses at once;
this allows it to do so in a single request rather than needing many
separate small requests.
This PR adds endpoints to the storage server that allow a client to
maintain an oxenmq connection to one or more swarm members and receive
pushed notifications through that connection when new messages are
delivered.
This is authenticated: subscribing requires a signature from the mailbox
owner signed within the past 14 days, and connections have to be
refreshed at least once/hour to keep the push notifications alive.
The immediate use for this will be for more efficient push notifications
for mobile clients using the push notification server, but this
mechanism will eventually also allow clients (over lokinet) to get
messages pushed to them rather than having to frequently poll.
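The two freshness rules above (a subscription signature signed within the past 14 days, and a renewal at least once per hour) can be sketched as a simple liveness check; the function and parameter names are illustrative, not the server's API:

```python
SIG_MAX_AGE = 14 * 86400  # subscription signature: at most 14 days old
RENEW_MAX_AGE = 3600      # connection must renew at least once per hour

def subscription_active(sig_ts: int, last_renewal: int, now: int) -> bool:
    # Both conditions must hold for push notifications to keep flowing
    # on the connection (all values are unix timestamps in seconds).
    return now - sig_ts <= SIG_MAX_AGE and now - last_renewal <= RENEW_MAX_AGE
```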
This adds a new endpoint to the storage server to revoke a subkey. The
storage server will keep track of previously revoked subkeys and will
now check that a message authenticated by a subkey does not use one of
these revoked subkeys.
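In sketch form (hypothetical names; the real endpoint and persistence details live in the storage server):

```python
revoked_subkeys = set()

def revoke_subkey(subkey: bytes) -> None:
    # The server keeps track of previously revoked subkeys...
    revoked_subkeys.add(subkey)

def subkey_auth_allowed(subkey: bytes) -> bool:
    # ...and rejects any message authenticated with one of them.
    return subkey not in revoked_subkeys
```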
The nearest-swarm logic was rather broken when handling ids in the
wrapping space (i.e. > max swarm id or < min swarm id). This fixes
those issues and also optimizes the lookup.
Issues:
- all the wrapped-around distance computations were off by *2*:
- The first off-by-one was because, even though swarm ids cannot be ==
max-u64, swarm space values (from pubkeys) *can* be, so all the
logic trying to make it wrap at MAX_SWARM_ID was invalid.
- The wrapping at MAX_SWARM_ID was *also* wrong because, even ignoring
the above point, distances are % (MAX_SWARM_ID+1), not %
(MAX_SWARM_ID).
- Because of the above, a session ID that maps to 0xff..ff in swarm
space would compute the distance to swarm_id 0 as max-u64 and thus
always go to the top swarm_id, even when 0xff..fe would map to
swarm_id 0.
- Values in the space between 0 and the minimum swarm id were rounding
to the *right* side node, while everything else rounds to the left
edge.
All of this is due to bad logic and insufficient test cases, which
unfortunately were not properly reviewed when this was originally added
to the storage server.
This commit changes the behaviour. This means old and new nodes *will*
slightly disagree for a couple of edge cases:
- we *do* have a swarm 0 on mainnet, so a session id that maps to
exactly 0xff..ff will break while the network has mixed versions.
This, however, is extremely rare (~1 in 2^64), and so is unlikely to
matter in practice.
- Values in [0, lowest_swarm_id) that are exactly at the midpoint
change to now prefer the left node (highest_swarm_id) rather than the
right (lowest_swarm_id). This makes it consistent with the other
cases. On mainnet, however, this case doesn't actually exist because
lowest_swarm_id equals 0.
This also changes the code to use a binary search, rather than a linear
scan, since swarms are guaranteed to be sorted; this both optimizes and
significantly simplifies the code.
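The fixed behaviour can be sketched in a few lines (illustrative Python, not the actual C++ implementation): distances wrap mod 2^64, the two candidate neighbours come from a binary search over the sorted swarm list, and midpoint ties round to the left (predecessor) swarm:

```python
import bisect

MASK = (1 << 64) - 1  # distances wrap mod 2^64, not mod MAX_SWARM_ID

def nearest_swarm(swarms: list, value: int) -> int:
    """swarms: sorted list of swarm ids; value: a point in [0, 2^64)."""
    n = len(swarms)
    i = bisect.bisect_left(swarms, value)
    left = swarms[(i - 1) % n]   # predecessor; wraps to the highest id
    right = swarms[i % n]        # successor; wraps to the lowest id
    dist_left = (value - left) & MASK
    dist_right = (right - value) & MASK
    # <= makes exact midpoints round to the left edge, consistently
    return left if dist_left <= dist_right else right
```

For example, with a swarm 0 present, the value 0xff..ff is now at distance 1 from swarm 0 and maps there, rather than computing a distance of max-u64 and landing on the top swarm id.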
Swarm replacement had a bug in updating IPs: it would try to *preserve*
the IP from the old data when loading new swarm data.
This was completely wrong: it meant we would refuse IP changes, which
would almost certainly lead to storage server connectivity problems when
an IP changes.
This fixes it to only preserve IP/port data *if* the new data has it as
0s (which could happen if, for example, oxend is resyncing and hasn't
received proofs yet, but we got an initial data from a bootstrap node).
The code path was also rather inefficient, with multiple unnecessary
vector copies and an oversized map allocation.
There was yet another oddity where we returned a reference to a dummy
value rather than a nullptr on failure; this changes it to a nullable
pointer instead (which also cascaded into returning an optional swarm
where that pointer was being used).
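The corrected IP-merge rule can be sketched as (a hypothetical helper; the real code operates on the node record structs):

```python
def merge_ip_port(old_ip: str, old_port: int, new_ip: str, new_port: int):
    # Only fall back to the previous IP/port when the incoming record
    # has them zeroed out (e.g. oxend is resyncing and hasn't received
    # proofs yet, but we got initial data from a bootstrap node);
    # otherwise always take the new values so IP changes propagate.
    if new_ip in ("", "0.0.0.0") and new_port == 0:
        return old_ip, old_port
    return new_ip, new_port
```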