This updates the coinbase transactions to reward service nodes
periodically rather than every block. If you receive a service node
reward, it will be delayed by x blocks; if you receive another reward
to the same wallet before those blocks have passed, it will be added to
your total, and the whole amount will be paid out once those x blocks
have passed.
For example, if our batching interval is 2 blocks:
Block 1 - Address A receives reward of 10 oxen - added to batch
Block 2 - Address A receives reward of 10 oxen - added to batch
Block 3 - Address A is paid out 20 oxen.
Batching accumulates a small reward for all nodes every block
The batching of service node rewards allows us to drip-feed rewards to
service nodes. Rather than accruing 16.5 oxen to a service node each
time it is the pulse block leader, we now reward every node
16.5 / num_service_nodes every block and pay each wallet the full
amount it has accrued after a period of time (likely 3.5 days).
To spread the payments evenly we now schedule rewards based on the
address of the recipient. The modulus of the address determines the
block in which that address should be paid, and by setting this modulus
interval to our service_node_batching interval we can guarantee that
wallets are paid out regularly and that the payments for all wallets
are distributed evenly over that interval.
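As a rough sketch of the scheme above (illustrative only: the function
names, the BATCHING_INTERVAL value, and the use of std::hash as a
stand-in for taking the recipient address modulo the interval are
assumptions, not the actual oxen-core code):

    #include <cstdint>
    #include <functional>
    #include <string>
    #include <unordered_map>
    #include <vector>

    constexpr uint64_t BATCHING_INTERVAL = 2520; // assumed: ~3.5 days of 2-minute blocks

    // Every block, each node's wallet accrues its share of the block reward.
    void accrue_rewards(std::unordered_map<std::string, uint64_t>& batch,
                        const std::vector<std::string>& node_addresses,
                        uint64_t block_reward)
    {
        uint64_t per_node = block_reward / node_addresses.size();
        for (const auto& addr : node_addresses)
            batch[addr] += per_node;
    }

    // An address is paid out in the block whose height matches the address
    // modulo the batching interval, so each wallet is paid exactly once per
    // interval and payouts are spread evenly across blocks.
    bool is_paid_at(const std::string& address, uint64_t height)
    {
        uint64_t bucket = std::hash<std::string>{}(address) % BATCHING_INTERVAL;
        return height % BATCHING_INTERVAL == bucket;
    }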
Converts all use of boost::filesystem to std::filesystem.
For macOS and potentially other exotic systems where std::filesystem
isn't available, we use ghc::filesystem instead (which is a drop-in
replacement for std::filesystem, unlike boost::filesystem).
This also greatly changes how we handle filenames internally by holding
them in filesystem::path objects as soon as possible (using
fs::u8path()), rather than strings, which avoids a ton of issues around
unicode filenames. As a result this lets us drop the boost::locale
dependency on Windows along with a bunch of messy Windows ifdef code,
and avoids the need for doing gross boost locale codecvt calls.
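A minimal sketch of the pattern, assuming a C++17 toolchain (the
function here is hypothetical, not code from this change):

    #include <filesystem>
    #include <fstream>
    #include <string_view>

    namespace fs = std::filesystem;

    void write_example(std::string_view utf8_name)
    {
        // Convert the UTF-8 string to a path immediately; fs::path then deals
        // with the platform-specific encoding (e.g. wide chars on Windows).
        fs::path p = fs::u8path(utf8_name);
        std::ofstream out{p}; // C++17 fstreams accept fs::path directly
        out << "example\n";
    }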
This allows inspecting, generating, and restoring both Ed25519 and
legacy secret key files as needed by lokid (naming in line with
PR #1240, which now avoids the non-Ed25519 keys by default):
$ ./loki-sn-keys generate key_ed25519
Generated SN Ed25519 secret key in key_ed25519
Public key: 8f94f6bbc1d5876484309fc7fd41396c6bd66c4631a34f04895887547cb374dd
X25519 pubkey: f3e42d1aff8db8232433ec86e3f56cdda44d4c510144f9b6786520175b08c97f
Lokinet address: t6kxpq6b4sdsjbbou9d94oj3pti7c5ngggtw6brjmndie9fuquqo.snode
$ ./loki-sn-keys legacy key
Generated SN legacy private key in key
Public key: f3a513ddfbe946c79c62c27cca2260b80606a2da1b4f75ba16c793a5750a8510
$ ./loki-sn-keys show key
key (legacy SN keypair)
==========
Private key: c8c17cc296c3e8bb603098b5d2535ad86662cdb24cd4c785c0485d0a490a2356
Public key: f3a513ddfbe946c79c62c27cca2260b80606a2da1b4f75ba16c793a5750a8510
$ ./loki-sn-keys show key_ed25519
key_ed25519 (Ed25519 SN keypair)
==========
Secret key: 3f078900bac5f3e97d5fe451a6e332826d29aae2e13fec895c1b527af88093c3
Public key: 8f94f6bbc1d5876484309fc7fd41396c6bd66c4631a34f04895887547cb374dd
X25519 pubkey: f3e42d1aff8db8232433ec86e3f56cdda44d4c510144f9b6786520175b08c97f
Lokinet address: t6kxpq6b4sdsjbbou9d94oj3pti7c5ngggtw6brjmndie9fuquqo.snode
$ ./loki-sn-keys restore key_ed25519-2
Enter the Ed25519 secret key:
3f078900bac5f3e97d5fe451a6e332826d29aae2e13fec895c1b527af88093c3
Public key: 8f94f6bbc1d5876484309fc7fd41396c6bd66c4631a34f04895887547cb374dd
X25519 pubkey: f3e42d1aff8db8232433ec86e3f56cdda44d4c510144f9b6786520175b08c97f
Lokinet address: t6kxpq6b4sdsjbbou9d94oj3pti7c5ngggtw6brjmndie9fuquqo.snode
Is this correct? Press Enter to continue, Ctrl-C to cancel.
Saved secret key to key_ed25519-2
$ ./loki-sn-keys restore-legacy key-2
Enter the legacy SN private key:
c8c17cc296c3e8bb603098b5d2535ad86662cdb24cd4c785c0485d0a490a2356
Public key: f3a513ddfbe946c79c62c27cca2260b80606a2da1b4f75ba16c793a5750a8510
Is this correct? Press Enter to continue, Ctrl-C to cancel.
Saved secret key to key-2
The archaic (i.e. decade old) cmake usage here really got in the way of
trying to properly use newer libraries (like lokimq), so this undertakes
overhauling it considerably to make it much more sane (and significantly
reduce the size).
I left most of the architecture-specific bits in the top-level
CMakeLists.txt intact; most of the effort here went into properly
loading dependencies, specifying dependencies, and avoiding a whole
pile of cmake antipatterns.
This bumps the required cmake version to 3.5, which is what xenial comes
with.
- extensive use of interface libraries to include libraries,
definitions, and include paths
- use Boost::whatever instead of ${Boost_WHATEVER_LIBRARY}. The
interface targets are (again) much better as they also give you any
needed include or linking flags without needing to worry about them.
- don't list header files when building things. This has *never* been
correct cmake usage (cmake has always known how to track the headers
that .cpp files include to know about build changes).
- remove the loki_add_library monstrosity; it breaks target names and
makes compiling less efficient because the author couldn't figure out
how to link things together.
- make loki_add_executable take the output filename, and set the output
path to bin/ and install to bin because *every single usage* of
loki_add_executable was immediately followed by setting the output
filename and setting the output path to bin/ and installing to bin.
- move a bunch of crap that is only used in one particular
src/whatever/CMakeLists.txt into that particular CMakeLists.txt instead
of the top level CMakeLists.txt (or src/CMakeLists.txt).
- Remove a bunch of redundant dependencies; most of them look like they
were just copy-and-pasted in, and many more aren't needed (since they
are implied by the PUBLIC linking of other dependencies).
- Removed `die` since it just does a FATAL_ERROR, but adds color (which
is useless since CMake already makes FATAL_ERRORs perfectly visible).
- Change the way LOKI_DAEMON_AND_WALLET_ONLY works to just change the
make targets to daemon and simplewallet rather than changing the build
process (this should make it faster, too, since there are various other
things that will be excluded).
* Cleanup and undoing some protocol breakages
* Simplify expiration of nodes
* Request unlock schedules entire node for expiration
* Fix off by one in expiring nodes
* Undo expiring code for pre v10 nodes
* Fix RPC returning register as unlock height and not checking 0
* Rename key image unlock height const
* Undo testnet hardfork debug changes
* Remove is_type for get_type, fix missing var rename
* Move serialisable data into public namespace
* Serialise tx types properly
* Fix typo in no service node known msg
* Code review
* Fix == to >= on serialising tx type
* Code review 2
* Fix tests and key image unlock
* Add command to print locked key images
* Update ui to display lock stakes, query in print cmd blacklist
* Modify print stakes to be less slow
* Remove autostaking code
* Refactor staking into sweep functions
It appears staking was originally written as a separate implementation
derived from stake_main. This merges them back into a common code path;
after removing autostaking there are only some minor differences. It
also makes sure that any upstream changes to sweeping will be picked up
by the staking process, which we want.
* Display unlock height for stakes
* Begin creating output blacklist
* Make blacklist output a migration step
* Implement get_output_blacklist for lmdb
* In wallet output selection ignore blacklisted outputs
* Apply blacklisted outputs to output selection
* Fix broken tests, switch key image unlock
* Fix broken unit_tests
* Begin change to limit locked key images to 4 globally
* Revamp prepare registration for new min contribution rules
* Fix up old back case in prepare registration
* Remove debug code
* Cleanup debug code and some unnecessary changes
* Fix migration step on mainnet db
* Fix blacklist outputs for pre-existing DB's
* Remove irrelevant note
* Tweak scanning addresses for locked stakes
Since we now only allow contributions from the primary address, we can
skip checking all subaddresses + lookahead to speed up wallet scanning
* Define macro for SCNu64 for Mingw
* Fix failure on empty DB
* Add missing error msg, remove contributor from stake
* Improve staking messages
* Flush prompt to always display
* Return the msg from stake failure and fix stake parsing error
* Tweak fork rules for smaller bulletproofs
* Tweak pooled nodes minimum amounts
* Fix crash on exit, there's no need to store on destructor
Since all information about service nodes is derived from the blockchain
and we store state every time we receive a block, storing in the
destructor is redundant as there is no new information to store.
* Make prompt be consistent with CLI
* Check max number of key images per user per node
* Implement error message on get_output_blacklist failure
* Remove resolved TODO's/comments
* Handle infinite staking in print_sn
* Atoi->strtol, fix prepare_registration, virtual override, stale msgs
The blockchain prunes seven eighths of prunable tx data.
This saves about two thirds of the blockchain size, while
keeping the node useful as a sync source for an eighth
of the blockchain.
No other data is currently pruned.
There are three ways to prune a blockchain:
- run monerod with --prune-blockchain
- run "prune_blockchain" in the monerod console
- run the monero-blockchain-prune utility
The first two will prune in place. Due to how LMDB works, this
will not reduce the blockchain size on disk. Instead, it will
mark parts of the file as free, so that future data will use
that free space, causing the file to not grow until free space
grows scarce.
The third way will create a second database, a pruned copy of
the original one. Since this is a new file, this one will be
smaller than the original one.
Once the database is pruned, it will stay pruned as it syncs.
That is, there is no need to use --prune-blockchain again, etc.
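Roughly, the stripe scheme works like the sketch below (the constants
and function name are assumptions chosen to match the "keep an eighth"
description above, not the actual pruning code):

    #include <cstdint>

    constexpr uint64_t STRIPE_SIZE = 4096; // assumed stripe width in blocks
    constexpr uint32_t LOG_STRIPES = 3;    // 2^3 = 8 stripes: keep 1/8, prune 7/8
    constexpr uint32_t NUM_STRIPES = 1 << LOG_STRIPES;

    // A pruned node keeps prunable tx data only for blocks in its own stripe
    // (blocks near the chain tip are typically kept regardless).
    bool keeps_prunable_data(uint64_t height, uint32_t my_stripe /* 1..NUM_STRIPES */)
    {
        uint32_t stripe = static_cast<uint32_t>((height / STRIPE_SIZE) % NUM_STRIPES) + 1;
        return stripe == my_stripe;
    }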
This only applies to pre-RCT outputs, for obvious reasons.
Note: DO NOT use a known spent list which includes outputs
which are not known spent. If the list includes any output
that's just strongly thought to be spent, but not provably
so, you risk finding yourself unable to sync past the point
where that output is spent.
I estimate only 200 MB saved on current mainnet though,
unless the new blackballing rule unearths a good amount of
large-amount-set extra spent outs.
42397359 Fixup 32bit arm build (TheCharlatan)
a06d2581 Fix Windows build (TheCharlatan)
ecaf5b3f Add libsodium to the packages, the arm build was complaining about it. (TheCharlatan)
cbbf4d24 Adapt translations to upstream changes (TheCharlatan)
db571546 Updated pcsc url (TheCharlatan)
f0ba19fd Add lrelease to the depends (TheCharlatan)
cfb30462 Add Miniupnp submodule (TheCharlatan)
5f7da005 Unbound is now a submodule. Adapt depends for this. (TheCharlatan)
d6b9bdd3 Update readmes to reflect the usage of depends (TheCharlatan)
56b6e41e Add support for apple and arm building (TheCharlatan)
29311fd1 Disable stack unwinding for mingw32 depends build. (TheCharlatan)
8db3d573 Modify depends for monero's dependencies (TheCharlatan)
0806a23a Initial depends addition (TheCharlatan)
Add pcsc-lite to linux builds
Fixup windows icu4c linking with depends, the static libraries have an 's' appended to them
Compiling depends arm-linux-gnueabihf will allow you to compile armv6zk monero binaries
It scans for known spent outputs and stores their public keys
in a database which can then be read by the wallet, which can
then avoid using those as fake outs in new transactions.
Usage: monero-blockchain-blackball db1 db2...
This uses the shared database in ~/.shared-ringdb
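A sketch of how the wallet side can use that database when picking fake
outs (the names here are assumptions, not the wallet2 API):

    #include <cstdint>
    #include <unordered_set>
    #include <vector>

    // Drop any candidate decoy whose global output index appears in the
    // known-spent (blackball) set read from the shared ringdb.
    std::vector<uint64_t> filter_decoys(const std::vector<uint64_t>& candidates,
                                        const std::unordered_set<uint64_t>& blackballed)
    {
        std::vector<uint64_t> usable;
        usable.reserve(candidates.size());
        for (uint64_t out_index : candidates)
            if (blackballed.count(out_index) == 0)
                usable.push_back(out_index);
        return usable;
    }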
default install targets
The binaries available for download on https://getmonero.org/downloads/
comprise monerod, monero-wallet-{cli,rpc} and
monero-blockchain-{ex,im}port. This change makes a manual build from
source install the same set of binaries as those downloads.
71e28760 debug_utilities: only build for debug builds (moneromooo-monero)
55e150ff debug_utilities: new object-sizes debug tool (moneromooo-monero)
fbaf5375 cn_deserialize: move to new debug_utilities subdirectory (moneromooo-monero)
Quick test with the first 56569 blocks from mainnet
version  verify  batch  time
old      0       200    1:16
new      0       200    0:57
old      0       5000   0:53
new      0       5000   0:51
old      1       200    est > 1h
new      1       200    10:21
old      1       5000   est > 1h
new      1       5000   8:27