Each call to zmq::poll is painfully slow when we have many open zmq
sockets, such as when we have 1800 outbound connections (i.e. connected
to every other service node, as service nodes sometimes have and the
Session push notification server *always* has).
In testing on my local Ryzen 5950 system each time we go back to
zmq::poll incurs about 1.5ms of (mostly system) CPU time with 2000 open
outbound sockets, and so if we're being pelted with a nearly constant
stream of requests (such as happens with the Session push notification
server) we incur massive CPU costs every time we finish processing
messages and go back to wait (via zmq::poll) for more.
In testing a simple ZMQ (no OxenMQ) client/server that establishes 2000
connections to a server, and then has the server send a message back on
a random connection every 1ms, we get atrocious CPU usage: the proxy
thread sits at a constant 100% CPU. Virtually all of this time is spent in the
poll call itself, though, so we aren't really bottlenecked by how much
can go through the proxy thread: in such a scenario the poll call uses
its CPU then returns right away, we process the queue of messages, and
return to another poll call. If we have lots of messages received in
that time, though (because messages are coming fast and the poll was
slow) then we process a lot all at once before going back to the poll,
so the main consequences here are that:
1) We use a huge amount of CPU
2) We introduce latency in a busy situation because each poll call
(e.g. 1.5ms) must complete before the next batch of messages can be
processed.
3) If traffic is very bursty then the latency can manifest another
problem: in the time it takes to poll we could accumulate enough
incoming messages to overfill our internal per-category job queue,
which was happening in the SPNS.
(I also tested with 20k connections, and the poll time scaling was
linear: we still processed everything, but in larger chunks because
every poll call took about 15ms, and so we'd have about 15 messages at a
time to process with added latency of up to 15ms).
Switching to epoll *drastically* reduces the CPU usage in two ways:
1) It's massively faster by design: there's a single setup and
communication of all the polling details to the kernel which we only
have to do when our set of zmq sockets changes (which is relatively
rare).
2) We can further reduce CPU time because epoll tells us *which* sockets
need attention, and so if only 1 connection out of the 2000 sent us
something we can only bother checking that single socket for
messages. (In theory we can do the same with zmq::poll by querying
for events available on the socket, but in practice it doesn't
improve anything over just trying to read from them all).
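A minimal sketch of the epoll approach (not the actual OxenMQ proxy
loop; it assumes a recent cppzmq providing zmq::sockopt): each zmq
socket exposes an underlying file descriptor via ZMQ_FD, which is
registered with epoll once, and after each wakeup only the flagged
sockets are examined. ZMQ_FD is edge-triggered and only means the
socket *may* have work, so checking ZMQ_EVENTS afterwards is mandatory:

    #include <sys/epoll.h>
    #include <unistd.h>
    #include <zmq.hpp>
    #include <cstddef>
    #include <vector>

    void epoll_pass(std::vector<zmq::socket_t>& socks) {
        // Registration: done once, redone only when the socket set changes.
        int efd = epoll_create1(0);
        for (std::size_t i = 0; i < socks.size(); i++) {
            epoll_event ev{};
            ev.events = EPOLLIN;
            ev.data.u64 = i;  // remember which socket owns this fd
            epoll_ctl(efd, EPOLL_CTL_ADD, socks[i].get(zmq::sockopt::fd), &ev);
        }

        // Wait/process: in the real proxy loop this part repeats forever.
        epoll_event hits[16];
        int n = epoll_wait(efd, hits, 16, -1);
        for (int i = 0; i < n; i++) {
            auto& sock = socks[hits[i].data.u64];
            // Drain just this socket instead of checking all 2000:
            while (sock.get(zmq::sockopt::events) & ZMQ_POLLIN) {
                zmq::message_t msg;
                if (!sock.recv(msg, zmq::recv_flags::dontwait))
                    break;
                // ... process msg ...
            }
        }
        close(efd);
    }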
In my straight zmq test script, using epoll instead reduced CPU usage in
the sends-every-1ms scenario from a constant pegged 100% of a core to an
average of 2-3% of a single core. (Moreover this CPU usage level didn't
noticeably change when using 20k connections instead of 2k).
MAX_SOCKETS wasn't working properly because ZMQ uses it when the context
is initialized, which happens when the first socket is constructed on
that context.
For OxenMQ, several sockets were constructed on the context during
OxenMQ construction, which meant the context_t was initialized at that
point rather than during start(), and so setting MAX_SOCKETS had no
effect: you'd always get the default.
This fixes it by making all the zmq::socket_t member variables
default-constructed, then replacing them with properly constructed
sockets during startup(), so that zmq::context_t initialization is also
deferred to the right place.
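A sketch of the pitfall (assuming a cppzmq recent enough to provide
zmq::ctxopt; a raw zmq_ctx_set call behaves the same way):

    #include <zmq.hpp>

    int main() {
        zmq::context_t ctx;
        ctx.set(zmq::ctxopt::max_sockets, 4096);  // works: no sockets exist yet
        zmq::socket_t s{ctx, zmq::socket_type::dealer};  // context initializes here
        ctx.set(zmq::ctxopt::max_sockets, 8192);  // too late: silently no effect
    }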
A second issue found during testing (also fixed here) is that creating
the sockets worker threads use to communicate with the proxy could fail
if doing so would violate the zmq max sockets limit, which wound up
throwing an uncaught exception and aborting. This pre-initializes (but
doesn't connect) all potential worker thread sockets during start() so
that a lazily-initialized worker thread will have one already set up
rather than having to create a new one (which could fail).
oxen-mq's cmake export errored when using a parent project's oxenc
target in a submodule oxen-mq build; add an intermediate IMPORTED target
so that cmake knows it doesn't have to export the oxenc dependency as
well.
For some reason using target_compile_features doesn't properly set up
C++17 flags in the generated compile_commands.json, which then breaks
clang-complete. Switch to using properties instead, which works.
PkgConfig::xyz won't exist before 3.21 if xyz doesn't require any flags
(which is common for a system-installed header-only library like oxenc).
(CMake bug 22180)
bt_*, hex, base32z, base64 all moved to oxen-encoding a while ago; this
finishes the move by removing them from oxenmq and instead making oxenmq
depend on oxen-encoding.
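For illustration, the helpers as they're now reached through oxenc (a
small sketch; header names as in oxen-encoding):

    #include <oxenc/hex.h>
    #include <oxenc/base64.h>
    #include <string>

    int main() {
        std::string h = oxenc::to_hex("hello");     // "68656c6c6f"
        std::string b = oxenc::to_base64("hello");  // "aGVsbG8="
        bool ok = oxenc::is_hex(h);                 // true
    }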
libzmq's IPv6 support is buggy when combined with DNS hostnames: in
particular, if you try to connect to a DNS name that has an IPv6
address, then zmq will *only* try an IPv6 connection, even if the local
client has no IPv6 connectivity, and even if the remote is only
listening on its IPv4 address.
This is much too unreliable to enable by default.
Makes some send/connection options more robust to a "do nothing"
runtime value, which the Python wrapper needs.
Also fixes a bunch of doc typos.
Bump version to 1.2.8 so that new pyoxenmq can build-depend on it.
inproc support is special in zmq: in particular it completely bypasses
the auth layer, which causes problems in OxenMQ because we assume that a
message will always have auth information (set during initial connection
handshake).
This adds an "always-on" inproc listener and adds a new `connect_inproc`
method for a caller to establish a connection to it.
It also throws exceptions if you try to `listen_plain` or `listen_curve`
on an inproc address, because that won't work for the reasons detailed
above.
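A hedged usage sketch (the callback signatures here mirror
connect_remote's and are an assumption; check connections.h for the
exact connect_inproc overload):

    #include <oxenmq/oxenmq.h>
    #include <string_view>

    int main() {
        oxenmq::OxenMQ omq;
        omq.start();
        auto conn = omq.connect_inproc(
                [](oxenmq::ConnectionID) { /* connected */ },
                [](oxenmq::ConnectionID, std::string_view why) { /* failed */ });
        omq.send(conn, "category.command", "hi");
    }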
Decoding into a std::byte output iterator was not working because the
`*out++ = val` assignment doesn't work when the output is std::byte and
val is a char/unsigned char/uint8_t. Instead we need to explicitly
cast, but figuring out what we have to cast to is a little bit tricky.
This PR makes it work (and bumps the version for this and the is_hex
fix).
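An illustrative sketch of the issue (not the actual oxenc code):

    #include <cstddef>
    #include <iterator>
    #include <type_traits>

    template <typename OutputIt>
    void emit(OutputIt& out, unsigned char val) {
        using V = typename std::iterator_traits<OutputIt>::value_type;
        if constexpr (std::is_same_v<V, std::byte>)
            *out++ = static_cast<std::byte>(val);  // std::byte* etc. need the cast
        else
            *out++ = val;  // char, unsigned char, uint8_t outputs work directly
    }

(The real fix needs more cases than this -- insert iterators, for
example, report a void value_type -- which is what makes deducing the
cast target tricky.)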
The lokimq headers & static lib were getting installed even when lokimq
is used as a project subdirectory, which is very annoying.
This adds an option for enabling the install lines, and only enables it
if doing a shared library or a top-level project build.
Add var::get/var::visit implementations of std::get/std::visit that are
used instead when compiling for an old macOS target.
The issue is that on a <10.14 macOS target Apple's libc++ is missing
std::bad_variant_access, and so any method that can throw it (such as
std::get and std::visit) can't be used. This workaround is ugly, but
such is life when you want to support running on Apple platforms.
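A minimal sketch of the workaround idea (the real implementations also
cover visit and the remaining get overloads):

    #include <cstdlib>
    #include <variant>

    namespace var {
    // get() built on std::get_if, which never throws
    // std::bad_variant_access, so it still works on pre-10.14 targets:
    template <typename T, typename... Ts>
    T& get(std::variant<Ts...>& v) {
        if (auto* p = std::get_if<T>(&v))
            return *p;
        std::abort();  // nothing sensible to throw without bad_variant_access
    }
    }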
Various small C++17 code improvements.
Replace mapbox::variant with std::variant.
Remove the bt_u64 type wrapper; instead we now have `bt_value`, which
wraps a variant holding both int64_t and uint64_t and has constructors
to route signed/unsigned integer types into the appropriate one.
lokimq::get_int checks both as appropriate during extraction.
As a side effect we no longer ever do a uint64_t -> int64_t conversion
on the wire, and no wrapper type is needed. Although this can break
older versions sending large positive integers (i.e. larger than
int64_t max), those weren't actually working completely reliably with
the mapbox variant anyway, and the one place using such a value in loki
core (a checksum) is already fully upgraded across the network
(currently using bt_u64, but always sending a positive value on the
wire).
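A hedged sketch of the extraction logic (illustrative only; the real
bt_value holds many more alternatives than shown here):

    #include <cstdint>
    #include <variant>

    using int_variant = std::variant<int64_t, uint64_t>;

    // A deserialized integer may be stored as either alternative, so
    // extraction checks both:
    template <typename IntType>
    IntType get_int(const int_variant& v) {
        if (auto* u = std::get_if<uint64_t>(&v))
            return static_cast<IntType>(*u);
        return static_cast<IntType>(std::get<int64_t>(v));
    }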
Removes lokimq::string_view (the type alias is still provided for
backwards compat, but now is always std::string_view).
Bump version (on dev branch) to 1.2.0
This class extends basic ZMQ addresses with parsing and generation of
addresses that embed curve pubkeys in various forms, along with a
QR-friendly address encoding.
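A hedged usage sketch (the parsing constructor and qr_address() method
shown here are assumptions about the interface; see address.h for the
real one):

    #include <lokimq/address.h>
    #include <string>

    int main() {
        std::string pk_hex(64, 'f');  // placeholder curve pubkey, hex-encoded
        lokimq::address addr{"curve://example.com:4321/" + pk_hex};
        std::string qr = addr.qr_address();  // upper-case, QR-friendly form
    }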
This changes SN status recognition to be checked per command invocation
rather than once at connection time. As this breaks the API quite
substantially, though it doesn't really affect the functionality, it
seems suitable to bump the minor version.
This requires a fundamental shift in how the calling application tells
LokiMQ about service nodes: rather than using a callback invoked on
connection, the application now has to call set_active_sns() (or the
more efficient update_active_sns(), if changes are readily available) to
update the list whenever it changes. LokiMQ then keeps this list
internally and consults it when determining whether a SN-only command
may be invoked.
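A hedged usage sketch (assuming set_active_sns takes a set of remote
pubkey strings; check the header for the exact container type):

    #include <lokimq/lokimq.h>
    #include <string>
    #include <unordered_set>

    void on_sn_list_update(lokimq::LokiMQ& lmq,
                           std::unordered_set<std::string> sn_pubkeys) {
        // Replace the whole list; if diffs are readily available,
        // update_active_sns(added, removed) avoids rebuilding the set.
        lmq.set_active_sns(std::move(sn_pubkeys));
    }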
This release also brings better request responses on errors: when a
request fails, the data argument will now be set to the failure reason,
one of:
- TIMEOUT
- UNKNOWNCOMMAND
- NOT_A_SERVICE_NODE (the remote isn't running in SN mode)
- FORBIDDEN (auth level denies the request)
- FORBIDDEN_SN (SN required and the remote doesn't see us as a SN)
Some of these (UNKNOWNCOMMAND, NOT_A_SERVICE_NODE, FORBIDDEN) were
already sent by remotes, but they weren't tied back to the pending
request, so they would merely produce a log warning while the request
still had to time out.
These errors (minus TIMEOUT, plus NO_REPLY_TAG, which signals that a
command was sent as a request but didn't include a reply tag) are also
sent in response to regular commands, where they simply produce a log
warning showing the error type and the command that caused the failure.
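A sketch of consuming these failure reasons in a request callback
(using the request callback's bool success flag plus data parts):

    #include <lokimq/lokimq.h>
    #include <iostream>
    #include <string>
    #include <vector>

    void make_request(lokimq::LokiMQ& lmq, lokimq::ConnectionID conn) {
        lmq.request(conn, "some.command",
                [](bool success, std::vector<std::string> data) {
                    if (success) { /* handle reply parts in data */ }
                    else if (!data.empty())
                        // data[0] now carries TIMEOUT, UNKNOWNCOMMAND, etc.
                        // instead of the request silently timing out:
                        std::cerr << "request failed: " << data[0] << "\n";
                });
    }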