- Remove implicit `operator bool` from ec_point/public_key/etc. which
was causing all sorts of implicit conversion mess and bugs.
- Change ec_point/public_key/etc. to use a `std::array<unsigned char,
32>` (via a base type) rather than a C-array of char that has to be
reinterpret_cast<>'ed all over the place.
- Add methods to ec_point/public_key/etc. that make it work more like a
container of bytes (`.data()`, `.size()`, `operator[]`, `begin()`,
`end()`).
- Make a generic `crypto::null<T>` that is a constexpr all-0 `T`, rather
than the mishmash of `crypto::null_hash`, `crypto::null_pkey`,
`crypto::hash::null()`, and so on.
- Replace three metric tons of `crypto::hash blahblah =
crypto::null_hash;` with the much simpler `crypto::hash blahblah{};`,
because there's no need to make a copy of a null hash in all these
cases. (Likewise for a few other null_whatevers).
- Replace a whole bunch of `if (blahblah == crypto::null_hash)` and `if
(blahblah != crypto::null_hash)` with the more concise `if
(!blahblah)` and `if (blahblah)` (which are fine via the newly
*explicit* bool conversion operators).
- `crypto::signature` becomes a 64-byte container (as above) but with
`c()` and `r()` methods to get the c and r data pointers. (Previously
`.c` and `.r` were `ec_scalar`s.)
- Delete with great prejudice CRYPTO_MAKE_COMPARABLE and
CRYPTO_MAKE_HASHABLE and all the other utter trash in
`crypto/generic-ops.h`.
- De-inline functions in very common crypto/*.h files so that they don't
have to get compiled 300 times.
- Remove the disgusting include-a-C-header-inside-a-C++-namespace
garbage from some crypto headers trying to be both a C and *different*
C++ header at once.
- Remove the toxic, disgusting, shameful `operator&` overloads on
ec_scalar, etc. that replaced `&x` with a `reinterpret_cast` of x to an
`unsigned char*`. This was pure toxic waste.
- Changed some `<<` stream outputs to fmt.
- Random other small changes encountered while fixing everything that
cascaded out of the above changes.
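A minimal sketch of the byte-container shape described above, assuming a base type plus an explicit bool conversion and a generic null value (type and member names here are illustrative, not the exact ones in the codebase):

```cpp
#include <array>
#include <cstddef>

namespace crypto {

// Hypothetical base type: a fixed-size container of bytes.
template <std::size_t N>
struct bytes {
    std::array<unsigned char, N> data_{};

    unsigned char* data() { return data_.data(); }
    const unsigned char* data() const { return data_.data(); }
    constexpr std::size_t size() const { return N; }
    unsigned char& operator[](std::size_t i) { return data_[i]; }
    const unsigned char& operator[](std::size_t i) const { return data_[i]; }
    auto begin() { return data_.begin(); }
    auto end() { return data_.end(); }
    auto begin() const { return data_.begin(); }
    auto end() const { return data_.end(); }

    // *explicit*, so `if (x)` works but accidental conversions don't:
    explicit operator bool() const {
        for (auto c : data_)
            if (c) return true;
        return false;
    }
    bool operator==(const bytes& o) const { return data_ == o.data_; }
    bool operator!=(const bytes& o) const { return data_ != o.data_; }
};

struct hash : bytes<32> {};

// One generic constexpr all-zero value replacing null_hash/null_pkey/etc.
template <typename T>
constexpr T null{};

}  // namespace crypto
```

With this shape, `crypto::hash h{};` is already null, and `if (!h)` replaces the old `== crypto::null_hash` comparisons.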
oxen::log::info(...), etc. are a bit too verbose; this simplifies them
to just `log::info(...)`, etc. by aliasing the `oxen::log` namespace
into most of the common namespaces we use in core.
The result is usage that is shorter but also reads better:
oxen::log::info(logcat, "blah: {}", 42);  // before
log::info(logcat, "blah: {}", 42);        // after
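The aliasing trick can be sketched as follows, with a stub standing in for the real oxen::log functions (the `last` recording variable exists purely so the example is checkable):

```cpp
#include <string>

namespace oxen::log {
    // Stand-in for the real logging API (assumed shape); `last` just
    // records the most recent message for demonstration purposes.
    inline std::string last;
    inline void info(const std::string& cat, const std::string& msg) {
        last = "[" + cat + "] " + msg;
    }
}

namespace cryptonote {
    namespace log = oxen::log;  // the alias: plain log:: now works in here

    inline void do_something() {
        log::info("core", "blah: 42");  // instead of oxen::log::info(...)
    }
}
```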
This replaces the current epee logging system with our oxen::log
library. It replaces the easylogging library with spdlog, removes the
macros (replacing them with functions), and standardises how we call
the logs.
This adds a new tx registration interpretation for HF19+ by repurposing
the fields of the registration:
- `expiration` becomes `hf_or_expiration`; for "new" registrations it
contains the hardfork number (e.g. 19 for HF19), but the data layout
on chain doesn't change: essentially we determine whether it's a new
registration based on whether the field is <= 255 (interpret as HF) or
not (interpret as expiration).
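The field interpretation above can be sketched as follows (the function and struct names are illustrative, not the actual code):

```cpp
#include <cstdint>

// HF19+ rule: a hf_or_expiration value <= 255 is read as a hardfork
// number; anything larger is read as an old-style expiration timestamp.
struct registration_interpretation {
    bool is_new;          // true: new-style (field holds a hardfork number)
    uint64_t hf;          // meaningful when is_new
    uint64_t expiration;  // meaningful when !is_new
};

inline registration_interpretation interpret(uint64_t hf_or_expiration) {
    if (hf_or_expiration <= 255)
        return {true, hf_or_expiration, 0};
    return {false, 0, hf_or_expiration};
}
```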
Changes in "new" registrations:
- stake amounts are atomic OXEN rather than portions. This lets us skip
a whole bunch of fiddling around with amounts that was necessary to
deal with integer truncation when converting between amounts and
portions.
- the fee in the registration data is now a value out of 10000 instead
of a portion (i.e. value out of 2^64-4). This limits fee precision to
a percentage with two decimal places instead of ~17 decimal places.
Internally we still convert this to a portion when processing the
registration for service_node_states, but this makes the registration
itself much simpler and easier to work with (as a human).
- HF19+ registrations no longer have an expiry timestamp (though they do
depend on the hardfork, so they "expire" whenever the next hard fork
occurs). The expiry timestamp was really only there to avoid a
registration amount decreasing too much from the dropping staking
requirement.
- Both types of registration are still permitted for HF19, but because
registrations with more than 4 contributors expose bugs in the portion
transfer code (that result in registrations becoming invalid),
old-style registrations are still limited to 4 contributors.
- HF19 will allow both old and new registrations, so that registrations
generated before the HF will still work, and so that we don't break
testnet which has various "old" registrations on it.
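The internal fee conversion described above (a value out of 10000 converted back to a portion out of 2^64-4 for service_node_states processing) might look roughly like this; `fee_to_portion` is a hypothetical name, and the 128-bit intermediate is a GCC/Clang extension:

```cpp
#include <cstdint>

constexpr uint64_t STAKING_PORTIONS = 18446744073709551612ULL;  // 2^64 - 4
constexpr uint64_t FEE_MAX = 10000;  // fee precision: two-decimal percent

// Sketch: scale a fee-out-of-10000 up to a portion-out-of-(2^64-4).
// unsigned __int128 (GCC/Clang extension) avoids overflow in the product.
inline uint64_t fee_to_portion(uint64_t fee) {
    if (fee > FEE_MAX) fee = FEE_MAX;
    return static_cast<uint64_t>(
        (static_cast<unsigned __int128>(fee) * STAKING_PORTIONS) / FEE_MAX);
}
```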
- Replace all cryptonote_config macros with constexpr variables. Some
become integer types, some become chrono types.
- generally this involved removing a "CRYPTONOTE_" prefix since the
values are now in the `cryptonote` namespace
- some constants are grouped into sub-namespaces (e.g.
cryptonote::p2p)
- deprecated constants (i.e. for old HFs) are in the `cryptonote::old`
namespace.
- all the magic hash key domain separating strings are now in
cryptonote::hashkey::WHATEVER.
- Move some economy-related constants to oxen_economy.h instead
- Replaced the BLOCKS_EXPECTED_IN_DAYS constexpr functions with a more
straightforward `BLOCKS_PER_DAY` value (i.e. old
`BLOCKS_EXPECTED_IN_DAYS(10)` is now `BLOCKS_PER_DAY * 10`).
- Replaced the `network_version` unscoped enum with a scoped enum
`cryptonote::hf`, replacing all the raw uint8_t values where they were
previously accepted with the new `hf` type.
- Made `network_type` a scoped enum so that it now has to be qualified
(network_type::TESTNET) and can't be arbitrarily/unintentionally
converted to/from an int.
- HARDFORK_WHATEVER macros have become cryptonote::feature::WHATEVER
constexpr hf values.
- Add `revision` to rpc hard_fork_info response
- Don't build trezor code at all (previously we were pointlessly
building an empty dummy lib).
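Taken together, the macro-to-constexpr conversion looks roughly like this sketch (the specific names and values below are illustrative examples, not the real constants):

```cpp
#include <chrono>
#include <cstdint>

namespace cryptonote {

// was: raw uint8_t hard fork values accepted everywhere
enum class hf : uint8_t { hf7 = 7, hf19 = 19 };

namespace p2p {
    using namespace std::chrono_literals;
    // was: a bare #define of a number of seconds; now a chrono type
    constexpr auto HANDSHAKE_TIMEOUT = 5s;
}

namespace feature {
    // was: a HARDFORK_SOME_FEATURE macro; now a constexpr hf value
    constexpr auto SOME_FEATURE = hf::hf19;
}

// replaces the BLOCKS_EXPECTED_IN_DAYS(n) constexpr function
constexpr uint64_t BLOCKS_PER_DAY = 720;

}  // namespace cryptonote
```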
All the encoding parts moved to oxen-encoding recently; this updates to
the latest version of oxen-mq, adds oxen-encoding, and converts
everything to use oxenc headers rather than the oxenmq compatibility
shims.
Snode revisions are a secondary version that let us put out a mandatory
update for snodes that isn't a hardfork (and so isn't mandatory for
wallets/exchanges/etc.).
The main point of this is to let us make a 9.2.0 release that includes
new mandatory minimums of future versions of storage server (2.2.0) and
lokinet (0.9.4) to bring upgrades to the network.
This slightly changes the HF7 block height to 0 (instead of 1) because,
apparently, we weren't properly checking the HF value of the
pre-first-HF genesis block at all before. (In practice this changes
nothing because genesis blocks are v7 anyway.)
This also changes (slightly) how we check for hard forks: now if we skip
some hard forks then we still want to know the height when a hard fork
triggers. For example, if the hf tables contains {7,14} then we still
need to know that the HF14 block height also is the height that
activates HF9, 10, etc.
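That lookup rule amounts to: given a table of {version, height} pairs, the activation height of any version, including skipped ones, is the height of the first entry whose version is at least the requested one. A minimal illustration (not the actual hard-fork table API):

```cpp
#include <cstdint>
#include <optional>
#include <vector>

struct hf_entry { uint8_t version; uint64_t height; };

// Returns the height at which `version` becomes active, or nullopt if
// the table contains nothing that new. Skipped versions activate at the
// height of the next version actually present in the table.
inline std::optional<uint64_t> activation_height(
        const std::vector<hf_entry>& table, uint8_t version) {
    for (const auto& e : table)
        if (e.version >= version)
            return e.height;
    return std::nullopt;
}
```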
- Previously the Pulse quorum validators recorded participation between
each other, so failures might not manifest in a decommission until
several common nodes aligned and agreed to vote off the node.
Voting now occurs when blocks arrive: validators that participated in
the generation of the block are marked. This information is shared
between all nodes syncing the chain, so decommissions are more readily
agreed upon and acted on in the Obligation quorums immediately.
Currently where we need to look up a block by height we do:
1. get block hash for the given height
2. look up block by hash
This hits the lmdb layer and does:
3. look up height from hash in hashes-to-height table
4. look up block from height in blocks table
which is pointless. This commit adds a `get_block_by_height()` that
avoids the extra runaround, and converts code doing height lookups to
use the new method.
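To make the saving concrete, here is a toy model of the two lookup paths (the real code goes through the BlockchainDB/lmdb layer, not std::map; the `lookups` counter is purely illustrative):

```cpp
#include <cstdint>
#include <map>
#include <string>

struct toy_db {
    std::map<uint64_t, std::string> blocks;          // height -> block
    std::map<std::string, uint64_t> hash_to_height;  // hash -> height
    std::map<uint64_t, std::string> height_to_hash;  // height -> hash
    int lookups = 0;  // counts table hits, to show the difference

    // Old path: height -> hash, then hash -> height, then height -> block.
    std::string get_block_old(uint64_t height) {
        ++lookups; const std::string& hash = height_to_hash.at(height);
        ++lookups; uint64_t h = hash_to_height.at(hash);
        ++lookups; return blocks.at(h);
    }

    // New path: straight height -> block.
    std::string get_block_by_height(uint64_t height) {
        ++lookups; return blocks.at(height);
    }
};
```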
This updates failures to come in batches of 10 followed by 10
should-be-good blocks.
Blocks with an odd 2nd-last digit (xxx1x, xxx3x, etc.) are the ones
where we add failures.
- Late messages that arrive fail signature validation and log an error.
In reality this is not always an error: it can mean the Pulse node
finished the round, or realised the round was going to fail, earlier
than another node.
The late-arriving messages refer to the previous round or block and
might actually validate OK; they are just late. This commit stores the
round history so that we can still validate these old messages and
silently ignore them instead of printing errors.
- Previously we submitted just 1 signature, which signed the contents of
the final block. This required us to delay signature verification:
if we received the message before we were in the final stage, we had
insufficient data to verify the signature.
This meant that when someone in the quorum received and relayed the
message, they could tamper with it and make it invalid (by changing
the round to something invalid, for example), causing other nodes in the
quorum to reject it and, eventually, record that the Service Node didn't
participate in the round, biasing the Service Node towards
decommissioning.
Instead of taking the shortcut and providing only 1 signature, we do the
same thing we do with all the other messages:
1. We sign the contents of the message. This proves that the message
originated from the Service Node it claims to have come from (preventing
any tampering).
2. The 2nd signature is the signature that signs the final block and is
included in the block for propagation in the network.
Doing so removes the ability of intermediate relay nodes to tamper with
the message.
- A non-participating node might be able to leak its way through a stage
and influence the receive count, causing participating nodes to
progress but eventually fail, reporting a nonsensical error that all
messages were received but the stage still failed.
- If a node is in the pulse quorum but the locked-in bitset (which
indicates the nodes that are locked in to participate in the round)
does not include it, go to sleep.
Previously the node would continue through the pulse rounds, but its
messages would be ignored by everyone else in the quorum.
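The locked-in check described above amounts to a bit test, sketched here under the assumed convention that bit i of the bitset corresponds to validator index i in the quorum:

```cpp
#include <cstdint>

// Returns true if the validator at `validator_index` is locked in to
// participate in the round, per the (assumed) bit-per-validator layout.
inline bool locked_in(uint16_t bitset, unsigned validator_index) {
    return (bitset >> validator_index) & 1u;
}
```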
- Alternative pulse blocks must be verified against the quorum they belong to.
This updates the alt_block_added hook in the Service Node List to check the
new Pulse invariants and, on passing, allow the alt block to be stored into
the DB until enough blocks have been checkpointed.
- New reorganization behaviour for the Pulse hard fork. Currently reorganization
rules work by preferring chains with greater cumulative difficulty and/or
a chain with more checkpoints. Pulse blocks introduce a 'fake' difficulty to
allow falling back to PoW and continuing the chain with reasonable difficulty.
If we fall into a position where we have an alt chain of mixed Pulse blocks
and PoW blocks, difficulty is no longer a valid metric to compare blocks (a
completely PoW chain could have much higher cumulative difficulty if hash
power is thrown at it vs Pulse chain with fixed difficulty).
So starting in HF16 we only reorganize when 2 consecutive checkpoints prevail
on one chain. This aligns with the idea of a PoS network that is
governed by the Service Nodes. The chain essentially doesn't recover until
Pulse is re-enabled and Service Nodes on that chain checkpoint the chain
again, causing a switch away from the PoW chain.
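As an illustration of the HF16 rule, treating the alt chain as a sequence of blocks flagged checkpointed-or-not (a deliberate simplification of the real checkpoint logic):

```cpp
#include <cstddef>
#include <vector>

// Sketch: the alt chain only wins the reorg when it contains 2
// consecutive checkpointed blocks.
inline bool alt_chain_wins(const std::vector<bool>& checkpointed) {
    for (std::size_t i = 1; i < checkpointed.size(); ++i)
        if (checkpointed[i - 1] && checkpointed[i])
            return true;
    return false;
}
```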
- Generating Pulse entropy no longer does a confusing +-1 to the height dance
and always begins from the top block. It now takes a block instead of a height,
since the blocks may be on an alternative chain or the main chain. In the
former case, we have to query the alternative DB table to grab the blocks to
work with.
- Removes the developer debug hashes in code for entropy.
- Adds core tests to check reorganization works
- We could do it earlier, but we need the info for producing the payouts.
Adding it earlier and shuffling around more state to store is not worth
it just for an early return to sleep, when we still have to wait for the
next round to start anyway.
- When not a participant in a pulse round, a node will iterate Pulse
quorums until it is one, and then sleep on that round. This can cause the
round counter to overflow past 255 if the Service Node is never selected
to participate, causing it to reject any Pulse block even if some
prior quorum sent it validly.
- Moving the non-participant check down to after the round starts also
puts all the participation checks (is validator, is producer, is
neither) into one spot for improved clarity.
Otherwise the state machine loop only runs once: at the end of the loop
the last-state variable is assigned the same value as context.state, so
the loop terminates.
Setting it first allows the loop to detect when the state has changed
and continue running.
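The fix above can be sketched with a toy state machine; the state names and handler transitions are illustrative, not the real Pulse states:

```cpp
// Record the previous state *before* the handler runs, so a state change
// made inside the loop body is detected and the loop keeps going.
enum class round_state { wait_for_next_block, send_handshakes, done };

struct round_context { round_state state = round_state::wait_for_next_block; };

inline int run_state_machine(round_context& context) {
    int iterations = 0;
    round_state last_state;
    do {
        last_state = context.state;  // set first, before handling the state
        ++iterations;
        switch (context.state) {
            case round_state::wait_for_next_block:
                context.state = round_state::send_handshakes; break;
            case round_state::send_handshakes:
                context.state = round_state::done; break;
            case round_state::done:
                break;  // terminal: state unchanged, so the loop exits
        }
    } while (last_state != context.state);
    return iterations;
}
```

Assigning `last_state` at the end of the body instead would make the condition always false after one pass, which is exactly the bug described.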