Commit Graph

336 Commits

Jason Rhinelander a27961d787
Merge pull request #90 from jagerman/zmq-bump
Bump ZMQ to latest release & propagate flags to build
2023-10-26 13:50:30 -03:00
Jason Rhinelander 5878473f67
Bump ZMQ to latest release & propagate flags to build
- ZMQ 4.3.5
- The local zmq build was not propagating either ccache or CXX_FLAGS,
  and so was slower, and would not work properly if the parent was built
  using `-DCMAKE_CXX_FLAGS=-stdlib=libc++` (e.g. for a full llvm build,
  such as the one in lokinet).
2023-10-26 13:29:23 -03:00
Jason Rhinelander 68b3420bad
Update oxen-encoding to latest dev 2023-09-29 14:43:21 -03:00
Jason Rhinelander dc7fb35493
Merge pull request #88 from jagerman/epoll
epoll: always retrieve events from triggered sockets
2023-09-16 12:24:57 -03:00
Jason Rhinelander caadd35052
epoll: fix hang on heavily loaded sockets
This fixes a hang in the epoll code that triggers on heavy, bursty
connections (such as the live SPNS APNs notifier).

It turns out that side-effects of processing our sockets could leave
other sockets (that we processed earlier in the loop) in a
needs-attention state which we might not notice if we go back to
epoll_wait right away.  zmq::poll apparently takes care of this (and so
is safe to re-poll even in this state), but when we are using epoll we
need to handle it ourselves by always checking for zmq events (which
itself has side effects) and, if we get any, re-entering the loop body
immediately *without* polling to deal with them.
2023-09-15 18:29:23 -03:00
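
A minimal sketch of that re-check pattern, using the standard ZMQ_EVENTS socket option. The names, helpers, and loop structure are illustrative (assuming cppzmq >= 4.7), not the actual OxenMQ proxy code:

```cpp
// Illustrative sketch only -- not the actual OxenMQ proxy loop.
#include <sys/epoll.h>
#include <vector>
#include <zmq.hpp>

// Drain whatever is currently readable on one socket.
static void drain(zmq::socket_ref sock) {
    zmq::message_t msg;
    while (sock.recv(msg, zmq::recv_flags::dontwait)) {
        // ... dispatch msg ...
    }
}

static void proxy_loop(int epoll_fd, std::vector<zmq::socket_ref>& sockets) {
    std::vector<epoll_event> evs(sockets.size());
    bool do_poll = true;
    while (true) {
        if (do_poll)
            epoll_wait(epoll_fd, evs.data(), static_cast<int>(evs.size()), -1);
        // ... service the epoll-flagged sockets here ...

        // Side effects of that processing can make sockets we already handled
        // readable again, so before going back to epoll_wait we re-check
        // ZMQ_EVENTS on *every* socket (the query itself has side effects on
        // zmq's edge-triggered fd); if anything is still pending, re-enter
        // the loop body immediately *without* polling.
        do_poll = true;
        for (auto& sock : sockets) {
            if (sock.get(zmq::sockopt::events) & ZMQ_POLLIN) {
                drain(sock);
                do_poll = false;
            }
        }
    }
}
```
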
Jason Rhinelander fd58ab9cac
Merge pull request #87 from jagerman/epoll
Add epoll support for Linux (huge proxy thread CPU reduction)
2023-09-14 16:22:51 -03:00
Jason Rhinelander 8f97add30f
Add epoll support for Linux
Each call to zmq::poll is painfully slow when we have many open zmq
sockets, such as when we have 1800 outbound connections (i.e. connected
to every other service node, as service nodes sometimes have and the
Session push notification server *always* has).

In testing on my local Ryzen 5950 system each time we go back to
zmq::poll incurs about 1.5ms of (mostly system) CPU time with 2000 open
outbound sockets, and so if we're being pelted with a nearly constant
stream of requests (such as happens with the Session push notification
server) we incur massive CPU costs every time we finish processing
messages and go back to wait (via zmq::poll) for more.

In testing a simple ZMQ (no OxenMQ) client/server that establishes 2000
connections to a server, and then has the server send a message back on
a random connection every 1ms, we get atrocious CPU usage: the proxy
thread sits at a constant 100% of a core.  Virtually all of this is in the
poll call itself, though, so we aren't really bottlenecked by how much
can go through the proxy thread: in such a scenario the poll call uses
its CPU then returns right away, we process the queue of messages, and
return to another poll call.  If we have lots of messages received in
that time, though (because messages are coming fast and the poll was
slow) then we process a lot all at once before going back to the poll,
so the main consequences here are that:

1) We use a huge amount of CPU
2) We introduce latency in a busy situation because the CPU has to make
   the poll call (e.g. 1.5ms) before the next message can be processed.
3) If traffic is very bursty then the latency can manifest another
   problem: in the time it takes to poll we could accumulate enough
   incoming messages to overfill our internal per-category job queue,
   which was happening in the SPNS.

(I also tested with 20k connections, and the poll time scaling was
linear: we still processed everything, but in larger chunks because
every poll call took about 15ms, and so we'd have about 15 messages at a
time to process with added latency of up to 15ms).

Switching to epoll *drastically* reduces the CPU usage in two ways:

1) It's massively faster by design: there's a single setup and
   communication of all the polling details to the kernel which we only
   have to do when our set of zmq sockets changes (which is relatively
   rare).
2) We can further reduce CPU time because epoll tells us *which* sockets
   need attention, and so if only 1 connection out of the 2000 sent us
   something we only have to check that single socket for
   messages.  (In theory we can do the same with zmq::poll by querying
   for events available on the socket, but in practice it doesn't
   improve anything over just trying to read from them all).

In my straight zmq test script, using epoll instead reduced CPU usage in
the sends-every-1ms scenario from a constant pegged 100% of a core to an
average of 2-3% of a single core.  (Moreover this CPU usage level didn't
noticeably change when using 20k connections instead of 2k).
2023-09-14 15:03:15 -03:00
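
For reference, a hedged sketch of the underlying mechanism (not OxenMQ's exact implementation): each zmq socket exposes a file descriptor via the ZMQ_FD option, which only needs to be registered with epoll once; epoll_wait then reports just the sockets that may need attention. Assumes cppzmq >= 4.7 for zmq::sockopt::fd.

```cpp
// Sketch of the one-time epoll setup for a set of zmq sockets; illustrative only.
#include <sys/epoll.h>
#include <vector>
#include <zmq.hpp>

int make_epoll_set(std::vector<zmq::socket_ref>& sockets) {
    int efd = epoll_create1(0);
    for (size_t i = 0; i < sockets.size(); i++) {
        epoll_event ev{};
        ev.events = EPOLLIN;   // the fd signals "this socket may need attention"
        ev.data.u64 = i;       // remember which socket the event belongs to
        epoll_ctl(efd, EPOLL_CTL_ADD, sockets[i].get(zmq::sockopt::fd), &ev);
    }
    return efd;  // only needs rebuilding when the set of zmq sockets changes
}
```

The per-call cost of handing thousands of sockets to zmq::poll disappears because the kernel-side registration above happens only when the socket set changes.
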
Jason Rhinelander e1b66ced48
Update oxen-encoding submodule 2023-08-28 18:46:54 -03:00
Jason Rhinelander 4f3ee28784
Bump version 2023-07-17 13:50:00 -03:00
Jason Rhinelander bd3e2cdfb0
Merge pull request #85 from jagerman/random-string-redux
Redo random string generation
2023-04-28 15:52:49 -03:00
Jason Rhinelander b8bb10eac5 Redo random string generation
This is probably slightly more efficient (as it avoids going through
uniform_int_distribution), but more importantly, it won't trigger some of
Apple's new Xcode buggy crap.
2023-04-04 12:16:43 -03:00
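
One way to achieve that (a sketch of the general approach, not necessarily the exact code in this commit): pull 64-bit blocks from the RNG and slice them into bytes directly, instead of invoking std::uniform_int_distribution per character.

```cpp
// Hypothetical sketch: fill a string with random bytes straight from the RNG
// output, avoiding std::uniform_int_distribution entirely.
#include <cstddef>
#include <cstdint>
#include <random>
#include <string>

std::string random_string(size_t size, std::mt19937_64& rng) {
    std::string s;
    s.reserve(size);
    uint64_t bits = 0;
    for (size_t i = 0; i < size; i++) {
        if (i % 8 == 0) bits = rng();          // 64 fresh random bits
        s += static_cast<char>(bits & 0xff);   // take one byte of them
        bits >>= 8;
    }
    return s;
}
```
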
Jason Rhinelander ff0e515c51
Fix installed headers
- Remove more deprecated shim headers
- Remove the gone (and newly gone) headers from the install list
- Add missing pubsub.h to install list
2022-10-05 20:26:34 -03:00
Jason Rhinelander 2e308d4f43
Merge pull request #82 from oxen-io/fix-race-condition
Attempt to fix a race condition
2022-10-05 19:35:28 -03:00
Jason Rhinelander 445f214840
Fix a race condition with tagged thread startup
There's a very rare race condition where a tagged thread doesn't seem to
exist when the proxy tries syncing startup with them, and so the proxy
thread hangs in startup.

This addresses it by avoiding looking at the `proxy_thread` variable
(which probably isn't thread safe) in the worker's startup, and
signalling the you-need-to-shutdown condition via a third option for the
(formerly boolean) `tagged_go`.
2022-10-05 19:32:54 -03:00
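
A sketch of what the three-state flag can look like (the enum and struct names other than `tagged_go` are hypothetical):

```cpp
// Illustrative three-state startup flag replacing a formerly boolean tagged_go.
#include <condition_variable>
#include <mutex>

enum class tagged_go_state { wait, go, shutdown };

struct tagged_startup {
    std::mutex mtx;
    std::condition_variable cv;
    tagged_go_state tagged_go = tagged_go_state::wait;

    // Worker side: block until the proxy says "go" or "shut down", without
    // ever reading the (not thread-safe) proxy_thread variable.
    bool wait_for_go() {
        std::unique_lock lock{mtx};
        cv.wait(lock, [&] { return tagged_go != tagged_go_state::wait; });
        return tagged_go == tagged_go_state::go;  // false: exit immediately
    }

    // Proxy side: release the tagged thread, or tell it to shut down.
    void signal(tagged_go_state s) {
        { std::lock_guard lock{mtx}; tagged_go = s; }
        cv.notify_all();
    }
};
```
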
Jason Rhinelander 358005df06
Merge pull request #80 from tewinget/pubsub
initial implementation of generic pub/sub management
2022-09-28 16:48:13 -03:00
Thomas Winget 85437d167b initial implementation of generic pub/sub management
Implements a generic pub/sub system for RPC endpoints to allow clients
to subscribe to things.

patch version bump

tests included and passing
2022-09-28 15:43:45 -04:00
Jason Rhinelander b26fe8cb04
Merge pull request #81 from jagerman/remove-deprecated
Remove deprecated code
2022-09-28 14:47:49 -03:00
Jason Rhinelander df19d1dd94
Add sid workaround
lsb_release -sc on sid currently prints 'n/a' because of debian bugs
1020893 and 1008735.  Add a workaround.

Also bumps clang builds to latest version.
2022-09-28 14:00:05 -03:00
Jason Rhinelander 25f714371b
Remove deprecated code
- Removes the old lokimq name compatibility shims
- Removes the oxenmq::bt* -> oxenc::bt* shim headers
2022-09-28 13:28:48 -03:00
Jason Rhinelander 0858dd278b
oxen-encoding submodule to latest tagged release 2022-08-31 12:00:07 -03:00
Jason Rhinelander 057685b7c0
Merge pull request #79 from jagerman/socket-limits
Fix zmq socket limit setting
2022-08-31 11:57:22 -03:00
Jason Rhinelander 3a3ffa7d23
Increase ulimit on macos
The test suite is now running out of file descriptors because of
macOS's tiny default limit.
2022-08-31 11:49:44 -03:00
Jason Rhinelander edcde9246a
Fix zmq socket limit setting
MAX_SOCKETS wasn't working properly because ZMQ uses it when the context
is initialized, which happens when the first socket is constructed on
that context.

For OxenMQ, we had several sockets constructed on the context during
OxenMQ construction, which meant the context_t was being initialized
during OxenMQ construction, rather than during start(), and so setting
MAX_SOCKETS would have no effect and you'd always get the default.

This fixes it by making all the member variable zmq::socket_t's
default-constructed, then replacing them with proper zmq::socket_t's
during startup() so that we also defer zmq::context_t initialization to
the right place.

A second issue found during testing (also fixed here) is that the socket
worker threads use to communicate with the proxy could fail to be created
if doing so would violate the zmq max sockets limit, which wound up
throwing an uncaught exception and aborting.  This pre-initializes (but
doesn't connect) all potential worker thread sockets during start() so
that a lazily-initialized worker thread will have one already set up
rather than having to create a new one (which could fail).
2022-08-05 10:40:01 -03:00
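
Sketch of the ordering constraint behind the first fix (assumes cppzmq >= 4.7 for the default-constructible zmq::socket_t and zmq::ctxopt; the class layout is illustrative, not OxenMQ's actual members):

```cpp
// Illustrative only: MAX_SOCKETS must be set before the first socket is
// created on the context, so member sockets start out default-constructed
// and are only replaced with real sockets in start().
#include <zmq.hpp>

struct Server {
    zmq::context_t ctx;
    zmq::socket_t command;   // default-constructed: no zmq socket yet

    void start(int max_sockets) {
        ctx.set(zmq::ctxopt::max_sockets, max_sockets);  // still takes effect here
        command = zmq::socket_t{ctx, zmq::socket_type::router};  // first real socket
    }
};
```
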
Sean c854046684
Merge pull request #78 from darcys22/custom-formatters
Adds custom formatters for ConnectionID and AuthLevel
2022-08-04 11:03:01 +10:00
Sean Darcy c91e56cf2d adds a custom formatter for OMQ structs that have a to_string member 2022-08-04 10:50:02 +10:00
Jason Rhinelander 61b7505304
Update oxenc so that oxenc::oxenc target exists 2022-06-09 13:26:58 -03:00
Jason Rhinelander b0c3bd4ee9
fix linkage for submodule dep use 2022-05-30 13:28:52 -03:00
Jason Rhinelander fd95919704
Merge pull request #77 from jagerman/private-linking
Fix use of parent oxenc::oxenc target
2022-05-30 13:13:17 -03:00
Jason Rhinelander 4671af3ca0
Fix use of parent oxenc::oxenc target
oxen-mq's export command errored when a submodule oxen-mq used a parent
oxenc target; add an intermediate IMPORTED target so that cmake knows it
doesn't have to export the oxenc dependency as well.
2022-05-30 13:07:49 -03:00
Jason Rhinelander c4b7aa9b23
Merge pull request #76 from jagerman/optimizations
Optimizations
2022-05-30 10:51:40 -03:00
Jason Rhinelander 115c5550ca
Bump version & embedded oxenc version 2022-05-24 16:15:39 -03:00
Jason Rhinelander ace6ea9d8e
Avoid unnecessary nullptr assignment
We can just leave the dangling pointer value in the `run` object: even
though we just deleted it, there's no need to reset this value because
it will never be used again.  (And even if we did, we don't check
against nullptr anyway, so having a nullptr here doesn't make anything
safer than a dangling pointer.)

The assignment (into the variant) uses a small amount of CPU (via
std::variant), so it's better for performance to just leave it dangling.
2022-05-12 12:48:46 -03:00
Jason Rhinelander 62a803f371
Add missing header
This was surely coming in implicitly already, but better to be explicit.
2022-05-12 12:48:15 -03:00
Jason Rhinelander d86ecb3a70
Use fixed vector for idle workers
Using a fixed-size vector with a separate variable tracking the count
seems to perform slightly better than popping/pushing the vector.
2022-05-12 12:44:54 -03:00
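
A sketch of the pattern with hypothetical names:

```cpp
// Illustrative: a preallocated vector plus a separate count, so parking and
// taking an idle worker never reallocates or destroys elements.
#include <cstddef>
#include <cstdint>
#include <vector>

struct idle_workers {
    std::vector<uint32_t> ids;  // sized once, to the maximum worker count
    size_t count = 0;

    explicit idle_workers(size_t max_workers) : ids(max_workers) {}

    void park(uint32_t worker_id) { ids[count++] = worker_id; }
    uint32_t take() { return ids[--count]; }
    bool empty() const { return count == 0; }
};
```
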
Jason Rhinelander 45791d3a19
Use fixed array for known-small internal messages
Internal messages (control messages, worker messages) are always 3 parts
or less, so we can optimize by using a stack allocated std::array for
those cases rather than needing to continually clear and expand a heap
allocated vector.
2022-05-12 12:42:08 -03:00
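
A sketch of the idea (illustrative only, assuming cppzmq >= 4.7): receive up to three parts into a stack-allocated std::array plus a count, instead of clearing and refilling a heap vector.

```cpp
// Illustrative only: internal control/worker messages are at most 3 parts.
#include <array>
#include <cstddef>
#include <zmq.hpp>

struct control_message {
    std::array<zmq::message_t, 3> parts;  // stack storage, reused every time
    size_t count = 0;
};

void recv_control(zmq::socket_ref sock, control_message& out) {
    out.count = 0;
    bool more = true;
    while (more && out.count < out.parts.size()) {
        auto ok = sock.recv(out.parts[out.count], zmq::recv_flags::none);
        (void)ok;  // blocking recv; a real implementation would check this
        more = out.parts[out.count].more();
        ++out.count;
    }
}
```
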
Jason Rhinelander b8e4eb148f
Use raw index bytes in worker router
Change the internal worker routing id to be "w" followed by the raw
integer bytes, so that we can just memcpy them into a uint32_t rather
than needing to do str -> integer conversion on each received worker
message.

(This also eliminates a vestigial call into oxenc internals).
2022-05-12 12:38:13 -03:00
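
A sketch of that routing-id scheme (helper names are hypothetical):

```cpp
// "w" followed by the raw index bytes, so decoding is a memcpy instead of a
// string-to-integer parse.  Bytes are in native endianness.
#include <cstdint>
#include <cstring>
#include <string>

std::string make_worker_routing_id(uint32_t index) {
    std::string id(1 + sizeof(index), '\0');
    id[0] = 'w';
    std::memcpy(&id[1], &index, sizeof(index));
    return id;
}

uint32_t worker_index(const std::string& routing_id) {
    uint32_t index;
    std::memcpy(&index, routing_id.data() + 1, sizeof(index));
    return index;
}
```
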
Jason Rhinelander fa6de369b2
Change std::queue to std::deque typedef
This shouldn't make any difference with an optimizing compiler, but
makes it a bit easier to experiment with different data structures.
2022-05-12 12:32:17 -03:00
Jason Rhinelander 371606cde0
Eliminate useless unordered_set
I don't know what this set was originally meant to be doing, but it
currently does nothing (except adding overhead).

The comment says it "owns" the instances but that isn't really true; the
instances effectively already manage themselves as they pass the pointer
through the communications between proxy and workers.
2022-05-12 12:25:46 -03:00
Jason Rhinelander 3a51713396
Add simpler Job subclass of Batch for simple jobs
This adds a much simpler `Job` implementation of `Batch` that is used
for simple no-return, no-completion jobs (as are initiated via
`omq.job(...)`).

This reduces the overhead involved in constructing/destroying the Batch
instance for these common jobs.
2022-05-12 12:20:51 -03:00
Jason Rhinelander 5c7f6504d2
Fix cmake compilation properties
For some reason using target_compile_features doesn't properly set up
C++17 flags in the generated compile_commands.json, which then breaks
clang-complete.  Switch to using properties instead, which works.
2022-05-12 12:15:30 -03:00
Jason Rhinelander 5a3c12e721
Merge pull request #74 from XutaxKamay/patch-1
Fix libzmq library linking
2022-04-07 11:15:24 -03:00
xutaxkamay f0c2222d6e Fix libzmq library linking
Fixes prefix for libzmq library output path
2022-04-07 07:14:23 +02:00
Jason Rhinelander 320a85ac0c
Merge pull request #73 from jagerman/cmake-pkgconfig-workaround
Cmake pkgconfig workaround
2022-03-30 16:56:20 -03:00
Jason Rhinelander 7fca36b3a9
Use liboxenc-dev in ci jobs 2022-03-30 16:21:03 -03:00
Jason Rhinelander bbdf4af98f
cmake work-around for cmake < 3.21
PkgConfig::xyz won't exist before 3.21 if xyz doesn't require any flags
(which is common for a system-installed header-only library like oxenc).

(CMake bug 22180)
2022-03-30 16:09:40 -03:00
Jason Rhinelander 77c4840273
Fix extra file in header install list 2022-02-07 14:41:51 -04:00
Jason Rhinelander d7f5efebc1
Merge pull request #72 from jagerman/oxenc
Use oxen-encoding and add compatibility shim headers
2022-02-07 14:39:59 -04:00
Jason Rhinelander a0a54ed461
Fix static build 2022-02-07 14:38:19 -04:00
Jason Rhinelander 045df9cb9b
Use oxen-encoding and add compatibility shim headers
bt_*, hex, base32z, base64 all moved to oxen-encoding a while ago; this
finishes the move by removing them from oxenmq and instead making oxenmq
depend on oxen-encoding.
2022-01-18 10:30:23 -04:00
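
A plausible shape for one of those compatibility shim headers (illustrative; the headers actually shipped may differ):

```cpp
// e.g. a forwarding oxenmq/hex.h; illustrative, not the exact shipped header.
#pragma once

#include <oxenc/hex.h>

namespace oxenmq {

// Re-expose the relocated encoding functions under the old namespace so that
// existing code including the oxenmq header keeps compiling.
using oxenc::from_hex;
using oxenc::is_hex;
using oxenc::to_hex;

}  // namespace oxenmq
```
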
Jason Rhinelander 3d178ce3ea
Merge pull request #71 from jagerman/disable-ipv6
Disable IPv6 by default
2021-12-02 19:06:50 -04:00