There's a very rare race condition where a tagged thread doesn't seem to
exist yet when the proxy tries to synchronize startup with it, and so the
proxy thread hangs in startup.
This addresses it by not reading the `proxy_thread` variable (which
probably isn't thread-safe) during the worker's startup, and instead
signalling the you-need-to-shut-down condition via a third state in the
(formerly boolean) `tagged_go`.
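A minimal sketch of the idea (names and synchronization details are
simplified illustrations, not oxenmq's actual implementation): making the
flag tri-state lets the proxy tell a waiting tagged worker to shut down
through the same variable the worker already waits on, so the worker
never has to read `proxy_thread` at all:

    #include <condition_variable>
    #include <mutex>

    enum class tagged_go_t { wait, go, shutdown };

    struct tagged_startup {
        std::mutex mtx;
        std::condition_variable cv;
        tagged_go_t tagged_go = tagged_go_t::wait;

        // Worker side: block until the proxy says go or shutdown; returns
        // false if the worker should exit without completing startup.
        bool wait_for_go() {
            std::unique_lock lock{mtx};
            cv.wait(lock, [this] { return tagged_go != tagged_go_t::wait; });
            return tagged_go == tagged_go_t::go;
        }

        // Proxy side: set the state and wake the waiting worker.
        void signal(tagged_go_t state) {
            {
                std::lock_guard lock{mtx};
                tagged_go = state;
            }
            cv.notify_all();
        }
    };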
MAX_SOCKETS wasn't working properly because ZMQ uses it when the context
is initialized, which happens when the first socket is constructed on
that context.
In OxenMQ, several sockets were constructed on the context during OxenMQ
construction itself, which meant the context_t was initialized then
rather than during start(), and so setting MAX_SOCKETS had no effect and
you'd always get the default.
This fixes it by default-constructing all of the zmq::socket_t member
variables, then replacing them with proper zmq::socket_t's during
start(), so that zmq::context_t initialization is also deferred to the
right place.
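Roughly, the new structure looks like this (a simplified sketch assuming
a recent cppzmq; the class and member names here are illustrative, not
the actual layout):

    #include <zmq.hpp>

    class OMQ {  // stand-in for OxenMQ
        zmq::context_t context;
        zmq::socket_t command;  // default-constructed: doesn't touch the context
        zmq::socket_t workers;  // "
        int MAX_SOCKETS = 10000;

    public:
        void start() {
            // Apply the limit before any socket exists; the context's lazy
            // initialization (triggered by the first real socket below)
            // will then honour it:
            context.set(zmq::ctxopt::max_sockets, MAX_SOCKETS);
            command = zmq::socket_t{context, zmq::socket_type::router};
            workers = zmq::socket_t{context, zmq::socket_type::router};
        }
    };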
A second issue found during testing (also fixed here) is that creating
the socket worker threads use to communicate with the proxy could fail
if that creation would violate the zmq max sockets limit, which wound up
throwing an uncaught exception and aborting. This pre-initializes (but
doesn't connect) sockets for all potential worker threads during start()
so that a lazily-initialized worker thread will have one already set up
rather than having to create a new one (which could fail).
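A sketch of the approach (hypothetical names; the endpoint string and
worker bookkeeping are simplified for illustration):

    #include <vector>
    #include <zmq.hpp>

    class OMQ {
        zmq::context_t context;
        std::vector<zmq::socket_t> worker_sockets;
        size_t max_workers = 4;

    public:
        void start() {
            // Construct every potential worker socket up front, while a
            // failure is still catchable, but don't connect them yet:
            for (size_t i = 0; i < max_workers; i++)
                worker_sockets.emplace_back(context, zmq::socket_type::dealer);
        }

        void worker_thread(size_t index) {
            // The lazily-started worker just connects its pre-made socket;
            // connecting (unlike constructing) can't hit the socket limit:
            auto& sock = worker_sockets[index];
            sock.connect("inproc://workers");
            // ... worker loop ...
        }
    };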
oxen-mq's cmake export command errored when a parent project provided
the oxenc target and oxen-mq was built as a submodule; add an
intermediate IMPORTED target so that cmake knows it doesn't have to
export the oxenc dependency as well.
We can just leave the dangling pointer value in the `run` object: even
though we just deleted it, there's no need to reset the value because it
will never be used again. (And even if it were, we don't check against
nullptr anyway, so a nullptr here wouldn't be any safer than a dangling
pointer.)
Resetting it would require an assignment into the std::variant, which
uses a small amount of CPU, so it's better for performance to just leave
it dangling.
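For illustration, a simplified sketch of the pattern (hypothetical field
and type names, not the actual `run` layout):

    #include <variant>

    struct Batch { /* ... */ };

    struct run_info {
        std::variant<std::monostate, Batch*> to_run;
    };

    void finish_batch(run_info& run) {
        delete std::get<Batch*>(run.to_run);
        // Deliberately *not* resetting here: the value is never read
        // again, and `run.to_run = std::monostate{};` would do real (if
        // small) work on a hot path.
    }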
Internal messages (control messages, worker messages) are always 3 parts
or fewer, so we can optimize by using a stack-allocated std::array for
those cases rather than needing to continually clear and expand a
heap-allocated vector.
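A sketch of the receive path under that assumption (hypothetical helper;
the real part handling is more involved):

    #include <array>
    #include <cstddef>
    #include <zmq.hpp>

    // Read an internal (control/worker) message of at most 3 parts into
    // a stack-allocated array; returns the number of parts received.
    size_t recv_internal(zmq::socket_t& sock,
                         std::array<zmq::message_t, 3>& parts) {
        size_t count = 0;
        bool more = true;
        while (more && count < parts.size()) {
            auto ok = sock.recv(parts[count], zmq::recv_flags::none);
            (void) ok;  // assume success for this sketch
            more = parts[count].more();
            count++;
        }
        return count;
    }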
Change the internal worker routing id to be "w" followed by the raw
integer bytes, so that we can just memcpy them into a uint32_t rather
than needing to do str -> integer conversion on each received worker
message.
(This also eliminates a vestigial call into oxenc internals.)
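The scheme, roughly (a self-contained sketch; byte order is whatever the
host uses, since the id never leaves the process):

    #include <cstdint>
    #include <cstring>
    #include <string>
    #include <string_view>

    // Build a worker routing id: "w" followed by the raw uint32_t bytes.
    std::string make_routing_id(uint32_t worker_id) {
        std::string rid(5, 'w');
        std::memcpy(&rid[1], &worker_id, sizeof worker_id);
        return rid;
    }

    // Recover the worker id from a routing id: just memcpy, no parsing.
    uint32_t parse_routing_id(std::string_view rid) {
        uint32_t worker_id;
        std::memcpy(&worker_id, rid.data() + 1, sizeof worker_id);
        return worker_id;
    }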
I don't know what this set was originally meant to be doing, but it
currently does nothing (except adding overhead).
The comment says it "owns" the instances, but that isn't really true;
the instances effectively already manage themselves, with their pointers
passed through the communications between proxy and workers.
This adds a much simpler `Job` implementation of `Batch` that is used
for simple no-return, no-completion jobs (as are initiated via
`omq.job(...)`).
This reduces the overhead involved in constructing/destroying the Batch
instance for these common jobs.
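Conceptually, something like this (a simplified sketch, not the actual
class layout):

    #include <functional>
    #include <utility>

    // Hypothetical minimal interface that workers invoke:
    struct BatchBase {
        virtual ~BatchBase() = default;
        virtual void run_job(int i) = 0;
    };

    // Job: a single fire-and-forget callback.  Unlike a full Batch there
    // are no stored results, no exception collection, and no completion
    // callback, so it is much cheaper to construct and destroy.
    struct Job final : BatchBase {
        std::function<void()> callback;
        explicit Job(std::function<void()> f) : callback{std::move(f)} {}
        void run_job(int) override { callback(); }
    };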
For some reason using target_compile_features doesn't properly set up
C++17 flags in the generated compile_commands.json, which then breaks
clang-complete. Switch to using properties instead, which works.
PkgConfig::xyz won't exist before CMake 3.21 if xyz doesn't require any
flags (which is common for a system-installed header-only library like
oxenc). (CMake bug 22180)
bt_*, hex, base32z, base64 all moved to oxen-encoding a while ago; this
finishes the move by removing them from oxenmq and instead making oxenmq
depend on oxen-encoding.
libzmq's IPv6 support is buggy when also using DNS hostnames: in
particular, if you try to connect to a DNS name that has an IPv6
address, then zmq will *only* try an IPv6 connection, even if the local
client has no IPv6 connectivity, and even if the remote is only
listening on its IPv4 address.
This is much too unreliable to enable by default.