Everscale node development update: Persistent state, ADNL, databases, and more

Over the past three months, developers have released three new versions of the Everscale node: 42, 43 and 44. We talked to the team behind this work to learn more about the main goals of the updates.

The latest releases are mainly aimed at improving node performance. The changes affect several major components, described below.

Persistent state

The first component is the persistent state. In the Everscale blockchain, validators do not keep the entire block history, only a proof chain from the zero state and the current state of the network. Periodically, however, a node saves the full blockchain state to a new persistent state file. With these files, new nodes joining the network can sync with the blockchain.
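As an illustration only, here is a minimal Rust sketch of the snapshotting idea: the full state is periodically serialized to a file that a newly joined node can download instead of replaying the whole block history. All names and the file layout are hypothetical; the real node uses its own formats and logic.

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

// Hypothetical placeholder for a fully materialized blockchain state.
struct FullState {
    seq_no: u32,
    bytes: Vec<u8>, // serialized representation of the state
}

// Save the current state as a persistent-state file that freshly joined
// nodes can fetch to sync instead of replaying all blocks.
fn save_persistent_state(state: &FullState, dir: &str) -> std::io::Result<()> {
    let path = format!("{dir}/persistent_state_{}.boc", state.seq_no);
    let mut out = BufWriter::new(File::create(path)?);
    out.write_all(&state.bytes)?;
    out.flush()
}
```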

It is validator nodes that create new persistent states. The developers found and fixed several suboptimal parameters so that persistent states will keep being generated even when the blockchain grows severalfold. According to tests, after the update a persistent state can be generated for a blockchain much larger than the current one.

Databases

The second component is local databases. Nodes use them to store current states and other information. The database reading process is a bottleneck that has been optimized over the last few releases, mainly via the data caching algorithm. Caching is important because it allows a node to quickly access the information it needs. 

As a result of the changes, caching and cache synchronization have become more efficient. Two new caching modes have been added: “Moderate” and “Off”. In “Off” mode nothing is cached, while “Moderate” is smarter: cells are cached asynchronously and their total number is limited. “Moderate” is now the recommended mode for an Everscale node.

Before, there was only a “Full” mode, in which, due to the large size of the state, the caching process could take up all of the validator's RAM.
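To make the distinction concrete, here is a minimal Rust sketch of how such a three-mode configuration could look. The names and the cell limit are illustrative assumptions, not the node's actual configuration keys.

```rust
use std::num::NonZeroUsize;

// Illustrative configuration mirroring the three cache modes described above.
enum CellCacheMode {
    // Cache every cell read from the database (the previous behaviour,
    // which could consume all available RAM on a large state).
    Full,
    // Cache cells asynchronously and cap how many are kept in memory.
    Moderate { max_cached_cells: NonZeroUsize },
    // Bypass the cache entirely; every read goes to the database.
    Off,
}

fn recommended_mode() -> CellCacheMode {
    // "Moderate" is the mode recommended for Everscale nodes;
    // the limit below is an example value, not the node's default.
    CellCacheMode::Moderate {
        max_cached_cells: NonZeroUsize::new(10_000_000).unwrap(),
    }
}
```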

We also limited the size and maximum number of rocksdb log files.
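For illustration, this is how such limits can be set with the Rust rocksdb crate; the concrete values are examples, not the settings used by the node.

```rust
use rocksdb::{Options, DB};

// A minimal sketch of capping the size and number of RocksDB log files.
fn open_db(path: &str) -> Result<DB, rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    opts.set_max_log_file_size(100 * 1024 * 1024); // rotate the log after ~100 MB
    opts.set_keep_log_file_num(5);                 // keep at most 5 rotated log files
    DB::open(&opts, path)
}
```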

Collation and messaging

The next set of adjustments concerns collation, i.e. the block creation mechanism. We made two major fixes.

First, we created a new algorithm for processing internal messages. An example helps explain it. When Shard A sends a message to Shard B, the message is stored in Shard A's outgoing queue. After Shard B executes the message, Shard A must clear it from that queue. Previously, clearing was done by a random-access algorithm. This works well for small queues, but with a large queue the old algorithm could leave internal messages uncleared for a long time.

We redesigned the cleaning algorithm: the queue is now built as a tree sorted by the messages' logical time. This makes it possible to quickly find messages already executed in other shards and remove them, after which collation proceeds.
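The following Rust sketch illustrates the idea under stated assumptions: the queue is keyed by logical time (LT) in an ordered tree, so clearing everything a destination shard has already processed becomes a cheap range split rather than a scan of the whole queue. Type and method names are hypothetical, not the node's actual structures.

```rust
use std::collections::BTreeMap;

type MsgId = [u8; 32];

// Outgoing queue keyed by the messages' logical time.
#[derive(Default)]
struct OutMsgQueue {
    by_lt: BTreeMap<u64, Vec<MsgId>>, // BTreeMap keeps entries ordered by LT
}

impl OutMsgQueue {
    fn push(&mut self, lt: u64, id: MsgId) {
        self.by_lt.entry(lt).or_default().push(id);
    }

    // Drop every message whose LT is <= the LT the destination shard
    // reports as already processed.
    fn clear_processed(&mut self, processed_up_to_lt: u64) {
        // split_off keeps keys >= the argument in the returned map,
        // i.e. everything with LT > processed_up_to_lt survives.
        let keep_from = processed_up_to_lt.saturating_add(1);
        self.by_lt = self.by_lt.split_off(&keep_from);
    }
}
```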

Second, we changed the processing of external messages. Previously, they too were processed in random order. This randomness has now been eliminated, and messages are processed in the order in which they arrive in the queue, which has stabilized the whole process.
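A minimal sketch of the FIFO behaviour, assuming a simple queue type (the node's actual structures are more involved):

```rust
use std::collections::VecDeque;

// External messages are drained in arrival (FIFO) order rather than at random.
struct ExtMsgQueue<T> {
    inner: VecDeque<T>,
}

impl<T> ExtMsgQueue<T> {
    fn new() -> Self {
        Self { inner: VecDeque::new() }
    }

    fn on_arrival(&mut self, msg: T) {
        self.inner.push_back(msg);
    }

    // The collator takes messages from the front, i.e. oldest first.
    fn next_for_collation(&mut self) -> Option<T> {
        self.inner.pop_front()
    }
}
```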

REMP

A number of updates are related to the upcoming launch of REMP, which is one of the important milestones of the Everscale roadmap.

Metrics collection

The mechanism for collecting metrics from nodes has also been redesigned. This is an optional feature that, when enabled, collects metrics about the internal workings of a node, such as queue time, collation time and so on. Previously, statsd was the only way to collect this data. After refactoring, metrics can also be exported directly through Prometheus, and it has become easier to add new metrics providers.
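A hedged sketch of what such a pluggable provider layer can look like; the trait and type names are illustrative, not the node's actual API.

```rust
// One trait, with statsd and Prometheus as interchangeable back ends.
trait MetricsProvider: Send + Sync {
    fn record(&self, name: &str, value: f64);
}

struct StatsdProvider { /* a statsd client handle would live here */ }
struct PrometheusProvider { /* a Prometheus registry would live here */ }

impl MetricsProvider for StatsdProvider {
    fn record(&self, name: &str, value: f64) {
        // Forward the sample to a statsd daemon (omitted in this sketch).
        let _ = (name, value);
    }
}

impl MetricsProvider for PrometheusProvider {
    fn record(&self, name: &str, value: f64) {
        // Update a gauge or histogram exposed for Prometheus scraping (omitted).
        let _ = (name, value);
    }
}

// The node can then report, e.g., collation time to every enabled provider.
fn report_collation_time(providers: &[Box<dyn MetricsProvider>], millis: f64) {
    for p in providers {
        p.record("collation_time_ms", millis);
    }
}
```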

Shutdown

The shutdown process of a validating node has been significantly sped up, and the security of the procedure has been increased.

ADNL

One more change: we added a function to reset an ADNL channel. When two nodes establish a channel, they exchange setup packets, agreeing on a single AES cipher key and a channel ID. From then on, when one node sends packets to the other, they are addressed not by the node's ADNL address but by the unique channel ID. If one node reboots, it loses the channel ID, while the other keeps using it to send further data packets and requests. Such a one-sided exchange leads to a de facto broken connection. To avoid this, we added a fallback that sends data bypassing the ADNL channel once one node of the pair has been unresponsive for some time. As a result, internal messages are sent over broken channels less often.
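A hedged sketch of such a fallback, with purely illustrative types: if the peer has not answered over the channel for too long, the packet is sent outside the channel, addressed by the peer's ADNL address, so that a rebooted peer can re-establish the channel.

```rust
use std::time::{Duration, Instant};

struct Channel {
    last_reply_at: Instant, // when the peer last answered over this channel
}

enum Transport {
    // Encrypt with the shared channel key and address by channel ID.
    OverChannel,
    // Encrypt to the peer's public key and address by its ADNL address.
    BypassChannel,
}

fn pick_transport(channel: &Channel, silence_limit: Duration) -> Transport {
    if channel.last_reply_at.elapsed() > silence_limit {
        // The peer may have rebooted and lost the channel ID: bypass the channel.
        Transport::BypassChannel
    } else {
        Transport::OverChannel
    }
}
```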

Tools

The Everscale protocol includes special tools for fetching blockchain state data directly from a node. A new console command, getstatsnew, was added to retrieve this data. We also added several methods for fetching new data types via the node API.

Broadcast algorithm

We are working on changing the broadcast model in Overlay. Currently, the model selects a constant number of random neighbors and sends messages to them. The upcoming change will limit the number of message exchanges, and the number of neighbors receiving a message will be calculated by an adaptive algorithm instead of using a constant.
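The post does not describe the exact formula, so the sketch below only illustrates the general idea of replacing a constant with a rule that depends on the overlay size; the logarithmic scale is an assumption chosen for illustration, not the node's actual algorithm.

```rust
// Derive the broadcast fan-out from the overlay size instead of a constant.
fn broadcast_fanout(overlay_size: usize) -> usize {
    const MIN_NEIGHBOURS: usize = 3;
    const MAX_NEIGHBOURS: usize = 16;
    // Example of a non-constant rule: grow roughly with ln(overlay_size).
    let adaptive = (overlay_size as f64).ln().ceil() as usize;
    adaptive.clamp(MIN_NEIGHBOURS, MAX_NEIGHBOURS)
}
```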

In the latest release, we added backward compatibility so that both algorithms are supported. After a series of tests and the metrics collected from them, we will let the community know when it is ready to switch to the new algorithm.

To keep an eye on further Everscale node updates, subscribe to the Evernode channel on Telegram.
