This uses EXCLUDE_FROM_ALL when BUILD_DEBUG_UTILS is not on so that the
debug utility binaries don't get built by default, but *can* still be
built with a `make cn_deserialize` (or whatever target you want) from a
build dir.
Also adds them to a couple drone builds to make sure they build.
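Roughly this shape in cmake (target and variable names here are illustrative, not the actual CMakeLists):

```cmake
# Sketch: EXCLUDE_FROM_ALL keeps the target out of the default build,
# but `make cn_deserialize` from the build dir still builds it on demand.
if(NOT BUILD_DEBUG_UTILS)
  set(debug_utils_exclude EXCLUDE_FROM_ALL)
endif()
add_executable(cn_deserialize ${debug_utils_exclude} cn_deserialize.cpp)
```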
-Ofast turns on -ffast-math, which allows gcc to do IEEE-violating FP
math optimizations. The consequence is that a Release build and a Debug
build produce *different* difficulty values. (It may also have
contributed to difficulty value divergence).
Turn it off because we are not Gentoo.
There is actually little point in setting CMAKE_CXX_FLAGS_RELEASE at all
here: adding `-DNDEBUG` is already the default for a cmake release
build, and cmake similarly has a release build default optimization
level (which I think is `-O3` -- which is already what we're doing with
`-Ofast` on Linux, except that `-Ofast` turns on unsafe stuff on top of
`-O3`).
For Debug builds, this removes -Og and lets cmake use its default (-O0),
which is usually better when developing: the lack of optimization means
faster compilation and binaries that are easier to trace. The downside
is enormous binaries (~250MB), but that seems at least manageable for a
debug build.
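For reference, a sketch of what cmake already does on its own for GCC/Clang (values approximate; check your cmake version):

```cmake
# These overrides were removable: cmake's built-in per-config defaults
# are roughly equivalent already.
#   CMAKE_CXX_FLAGS_RELEASE:  -O3 -DNDEBUG
#   CMAKE_CXX_FLAGS_DEBUG:    -g   (no -O flag, i.e. the compiler's -O0)
```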
This makes it a little clearer what failed when something fails.
Ideally we'd also be able to distinguish between build and test, but
currently that's not practical to do with drone: between each stage of
the build the docker image gets removed and replaced with the next
stage's image, which means we'd have to build up a new docker image with
dependencies before running the tests.
(Being able to have multiple stages in one docker image is a drone
upstream wishlist item).
I've uploaded the various deps to https://builds.lokinet.dev/deps so as
not to hit the upstream mirrors so much (also reduces the reliance on
github, which is sometimes flaky).
This is prefixed onto the existing URLs so that if it isn't found we
fall back to the upstream URL.
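ExternalProject accepts a list of URLs and tries them in order, so the mirror can simply go first. A sketch (package name and URLs here are placeholders):

```cmake
include(ExternalProject)
# Mirror first, upstream second: cmake tries the URLs in order until one
# succeeds, so a missing file on the mirror falls back to upstream.
ExternalProject_Add(unbound_external
  URL https://builds.lokinet.dev/deps/unbound-x.y.z.tar.gz
      https://example-upstream.org/unbound-x.y.z.tar.gz
)
```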
Adds static drone builds for linux (built on bionic), mac, and Windows
(built with mingw32).
The builds get uploaded to https://builds.lokinet.dev
The linux and mac builds use LTO (which takes longer, but significantly
reduces binary size). The mingw32 build can probably also get there,
but currently fails with LTO when unbound tries linking against openssl
(it probably just needs a small patch to add a magic -lwsock2 dep in the
right place in unbound).
The Mac binaries are built using a 10.13 deployment target. (I'm not
100% sure that this is sufficient -- it's possible we might have to also
push the magic mac deployment flag to the built dependencies).
From a cmake build dir (`make` used for simple example; can also be
something else, such as ninja):
make create_tarxz
make create_zip
create a loki-<OS>[-arch]-x.y.z[-dev]-<GITHASH>.tar.xz or .zip.
make create_archive
decides what to do based on the build type: creates a .zip for a windows
build, a tar.xz for anything else. (We have been distributing the macOS
binaries as a .zip but that seems unnecessary: I tested on our dev mac
and a .tar.xz offers exactly the same UX as a .zip, but is noticeably
smaller).
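The dispatch is small; a sketch (target names from the text, wiring hypothetical):

```cmake
# create_archive just forwards to whichever format fits the platform.
add_custom_target(create_archive)
if(WIN32)
  add_dependencies(create_archive create_zip)    # windows build -> .zip
else()
  add_dependencies(create_archive create_tarxz)  # everything else -> .tar.xz
endif()
```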
From the top-level makefile there is also a new `make
release-full-static-archive` that does a full static build (including
all deps) and builds the archive.
This adds a static dependency script for libraries like boost, unbound,
etc. to cmake, invokable with:
cmake .. -DBUILD_STATIC_DEPS=ON
which downloads and builds static versions of all our required
dependencies (boost, unbound, openssl, ncurses, etc.). It also implies
-DSTATIC=ON to build other vendored deps (like miniupnpc, lokimq) as
static as well.
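The implication is a one-liner; a sketch (the real script's mechanism may differ):

```cmake
# BUILD_STATIC_DEPS forces STATIC on so vendored deps (miniupnpc,
# lokimq, ...) are built static too.
if(BUILD_STATIC_DEPS)
  set(STATIC ON CACHE BOOL "build vendored deps static" FORCE)
endif()
```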
Unlike the contrib/depends system, this is easier to maintain (one
script using nicer cmake with functions instead of raw Makefile
spaghetti code), and isn't concerned with reproducible builds -- this
doesn't rebuild the compiler, for instance. It also works with the
existing build system so that it is simply another way to invoke the
cmake build scripts but doesn't require any external tooling.
This works on Linux, Mac, and Windows.
Some random comments on this commit (for preserving history):
- Don't use target_link_libraries on imported targets. Newer cmake is
fine with it, but Bionic's cmake doesn't like it but seems okay with
setting the properties directly.
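Concretely (target names here are just an example, not the actual ones):

```cmake
# What Bionic's cmake (3.10) rejects, though newer cmake accepts it:
#   target_link_libraries(OpenSSL::SSL INTERFACE OpenSSL::Crypto)
# Portable equivalent: set the imported target's properties directly.
set_target_properties(OpenSSL::SSL PROPERTIES
  INTERFACE_LINK_LIBRARIES OpenSSL::Crypto)
```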
- This rebuilds libzmq and libsodium, even though there is some
provision already within loki-core to do so: however, the existing
embedded libzmq fails with the static deps because it uses libzmq's
cmake build script, which relies on pkg-config to find libsodium which
ends up finding the system one (or not finding any), rather than the one
we build with DownloadLibSodium. Since both libsodium and libzmq are
fairly simple builds it seemed easiest to just add them to the cmake
static build rather than trying to shoehorn the current code into the
static build script.
- Half of the protobuf build system ignores CC/CXX just because Google,
and there's no documentation anywhere except for a random closed bug
report about needing to set these other variables (CC_FOR_BUILD,
CXX_FOR_BUILD) instead, but you need to. Thanks Google.
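A hypothetical wiring of that workaround (URL, triplet, and commands are placeholders, not the actual script):

```cmake
include(ExternalProject)
# protobuf's configure ignores CC/CXX for half its build, so the
# *_FOR_BUILD variables must be set as well when cross-compiling.
ExternalProject_Add(protobuf_external
  URL https://builds.lokinet.dev/deps/protobuf-x.y.z.tar.gz
  CONFIGURE_COMMAND env CC_FOR_BUILD=cc CXX_FOR_BUILD=c++
      <SOURCE_DIR>/configure --host=x86_64-w64-mingw32 --disable-shared
  BUILD_COMMAND make
  INSTALL_COMMAND make install)
```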
- The boost build is set to output very little because even the minimum
-d1 output level spams ~15k lines of output just for the headers it
installs.
This makes three big changes to how translation files are generated:
- use Qt5 cmake built-in commands to do the translations rather than
calling lrelease directly. lrelease is often not in the path, while Qt5
cmake knows how to find and invoke it.
- Slam the resulting files into a C++ file using a cmake script rather
than needing to compile a .c file to generate the C++ file. This is
simpler, but more importantly avoids the mess needed when cross
compiling of having to import a cmake script from an external native
build.
- In the actual generated files, use an unordered_map rather than a
massive list of static variable pointers.
The same struct `lazy_init` was being used in two different files linked
into the same binary, causing test failures depending on which one got
kept in the final binary. (Enabling LTO surfaced the violation, warned
about it, and caused the spent_outputs.not_found test to fail).
The mismatch in the extern declaration spews warnings under LTO (quite
rightly). It's kind of surprising that the mismatched declaration and
implementation even worked.
- the listed libzmq required version was much older than what we
actually require
- libnorm and libpgm are transitive dependencies of libzmq3, it makes no
sense to list them here (rather the libzmq3-dev package should depend on
them if they are needed).
- gtest is now always built from the submodule copy
- the readline dev package was wrong for debian/ubuntu.
CMake 3.9+ has generic LTO enabling code, switch to it.
Update required cmake version to 3.10 (3.9 is probably sufficient, but
3.10 is bionic's version that we're actually testing).
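The generic enabling code looks like this (variable names illustrative):

```cmake
cmake_minimum_required(VERSION 3.10)
# CMake 3.9+ knows the right LTO flags per compiler; no hand-rolled
# -flto logic needed.
include(CheckIPOSupported)
check_ipo_supported(RESULT ipo_ok OUTPUT ipo_error)
if(ipo_ok)
  set(CMAKE_INTERPROCEDURAL_OPTIMIZATION ON)
endif()
```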
static_assert is required both by C++11 and C11; if we don't have a
standard compliant compiler then compilation should fail, not be hacked
like this.
The C++ version of this definition is particularly preposterous; the C
version is probably just covering up that the C code forgot to include
the `<assert.h>` header where the `static_assert` macro is defined.
There's really no reason to submodule it - we work with pretty much any
libunbound version, and it's a very commonly available library
(comparable to sqlite3 or boost, which we don't submodule).
This removes the submodule and switches it to a hard dependency.
Python2 reached end of life and is being actively removed from Linux
distributions; drop support for it in the test code, too.
- Stop using deprecated FindPythonInterp when we have cmake 3.12 (which
has a much better FindPython).
- Fix tests that might invoke Python 2, and remove Python2-isms
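The version switch can be sketched like this (variable plumbing hypothetical):

```cmake
# Prefer the modern module when available; fall back for older cmake.
if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.12)
  find_package(Python3 COMPONENTS Interpreter)
  set(PYTHON_EXE ${Python3_EXECUTABLE})
else()
  find_package(PythonInterp 3)  # deprecated, but what older cmake has
  set(PYTHON_EXE ${PYTHON_EXECUTABLE})
endif()
```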
This triggers a pile of false positives from gtest and mapbox variant.
In the case of gtest, these were being hidden by including gtest as a
system include, which was simply disgusting.
The gtest version bundled inside tests/ is ancient (7 years old) and
doesn't build properly for some compilers.
Replace it with a current gtest submodule in external/.
Using an external project to build a subdirectory is gross, and moreover
it breaks when trying to use a custom compiler: it picks the wrong one
(or just fails outright if no `c++` binary exists).
Since the builds appear to run just fine without this, just include it
via add_subdirectory instead.
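A sketch of the replacement (submodule path per the text, test target name hypothetical):

```cmake
# Including the submodule directly means gtest inherits our
# CMAKE_CXX_COMPILER and the rest of the toolchain settings.
add_subdirectory(external/googletest EXCLUDE_FROM_ALL)
target_link_libraries(unit_tests PRIVATE gtest)
```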