I realized after merging the previous PR that it is difficult to
correctly pass ownership into a timer, because something like:
TimerID x = omq.add_timer([&] { omq.cancel_timer(x); }, 5ms);
doesn't work when the timer job needs to outlive the caller. My next
approach was:
auto x = std::make_shared<TimerID>();
*x = omq.add_timer([&omq, x] { omq.cancel_timer(*x); }, 5ms);
but this has two problems: first, TimerID wasn't default constructible,
and second, there was no guarantee that the assignment to *x would
happen before (and be visible to) the access during cancellation.
This commit fixes both issues: TimerID is now default constructible, and
an overload is added that takes an lvalue reference to the TimerID to
set rather than returning it (and guarantees that it will be set before
the timer is created).
Updates `add_timer` to return a new opaque TimerID object that can later
be passed to `cancel_timer` to cancel an existing timer.
Also adds timer tests, which were previously omitted (except for one in
the tagged threads section), along with a new test for timer deletion.
Storage server, in particular, needs to disable pubkey-based routing on
its connection to oxend (because it is sharing oxend's own keys), but
wants it by default for SS-to-SS connections. This allows the oxend
connection to turn it off so that we don't have oxend omq connections
replacing each other.
This provides an interface for sending a reply to a message later (i.e.
after the Message& itself is no longer valid) by using a new
`send_later()` method of the Message instance that returns an object
that can properly route replies (and can outlive the Message it was
called on).
Intended use is:
run_this_lambda_later([send=msg.send_later()] {
    send.reply("content");
});
which is equivalent to:
run_this_lambda_later([&msg] {
    msg.send_reply("content");
});
except that it works properly even if the lambda is invoked beyond the
lifetime of `msg`.
Decoding into a std::byte output iterator was not working because the
`*out++ = val` assignment doesn't work when the output is std::byte and
val is a char/unsigned char/uint8_t. Instead we need to explicitly
cast, but figuring out what we have to cast to is a little bit tricky.
This PR makes it work (and bumps the version for this and the is_hex
fix).
`is_hex()` is a bit misleading as `from_hex()` requires an even-length
hex string, but `is_hex()` also allows odd-length hex strings, which
means currently callers should be doing `if (lokimq::is_hex(str) &&
str.size() % 2 == 0)`, but probably aren't.
Since the main point of `lokimq/hex.h` is for byte<->hex conversions it
doesn't make much sense to allow `is_hex()` to return true for something
that can't be validly decoded via `from_hex()`, thus this PR changes it
to return false.
If someone *really* wants to test for an odd-length hex string (though
I'm skeptical that there is a need for this), this also exposes
`is_hex_digit` so that they could use:
bool all_hex = std::all_of(str.begin(), str.end(), lokimq::is_hex_digit<char>);
Currently the lokimq headers & static lib get installed even when lokimq
is used as a project subdirectory, which is very annoying.
This adds an option to enable the install rules, and only enables it by
default when building a shared library or a top-level project.
The thread_local `std::map` here can end up being destructed *before*
the LokiMQ instance (if both are being destroyed during thread joining),
in which case we segfault by trying to use the map. Move the owning
container into the LokiMQ instead (indexed by the thread) to prevent
that.
Also cleans this code up by:
- Don't close control sockets from the proxy thread; socket_t's aren't
necessarily thread safe, so this could have been causing issues with
double-closing or using a closed socket.
- We can just let them get closed during destruction of the LokiMQ.
- Avoid needing shared_ptr's; instead we can just use a unique pointer
with raw pointers in the thread_local cache. This simplifies closing
because all closing will happen during the LokiMQ destruction.
Apple, in particular, often fails tests with an "address already in use"
error if we attempt to reuse a port that the process just closed, because
it is a wonderful OS.
Add var::get/var::visit reimplementations of std::get/std::visit that
are used instead when compiling for an old macOS target.
The issue is that on a <10.14 macos target Apple's libc++ is missing
std::bad_variant_access, and so any method that can throw it (such as
std::get and std::visit) can't be used. This workaround is ugly, but
such is life when you want to support running on Apple platforms.
On the wire they are just lists, but this lets you put tuples onto and
pull tuples off of the wire. (Also supports std::pair).
Supports direct serialization (via bt_serialize()/bt_deserialize()),
list/dict consumer deserialization, and conversion from a bt_value or
bt_list via a new bt_tuple() function.
data_parts() wasn't used anywhere, and was broken: it called
bt_deserialize, which was just wrong.
This repurposes it to take iterators over strings (or string-like types)
and append those parts as message parts.
Also adds tests for it.
If the LokiMQ object gets destroyed before having called `start()` then
we'd end up destroying the threads for tagged workers without joining
them. This listens on the internal worker socket (normally the domain
of the proxy thread) and tells the tagged workers to QUIT if such a
destruction happens.
This allows mixing some outside task into the lokimq job queue for a
category (queued up with native LMQ requests for that category) for use
when there is some external process that is able to generate messages.
For example, the most immediate use for this is to allow an HTTP server
to handle incoming RPC requests and, as soon as they arrive, inject them
into LokiMQ's queue for the "rpc" category so that native LMQ rpc
requests and HTTP rpc requests share the same thread pool and queue.
These injected jobs bypass all of LokiMQ's authentication and response
mechanisms: that's up to the invoked callback itself to manage.
Injected tasks are somewhat similar to batch jobs, but unlike batch jobs
they are queued and prioritized as ordinary external LokiMQ requests.
(Batch jobs, in contrast, have a higher scheduling priority, no queue
limits, and typically a larger available thread pool).