First off, what is the blockchain trilemma?
The blockchain trilemma refers to the commonly held notion that distributed ledger technologies (DLTs) can achieve only two of the following three properties at any given time: decentralization, security and scalability. In the blockchain industry, the concept is attributed to Vitalik Buterin. From a technological point of view, however, the issue has a much longer history.
The trilemma conundrum predates blockchains as we know them today. In 2000, computer scientist Eric Brewer formulated the CAP theorem, which states that a distributed data store can guarantee only two of its three namesake properties at any given time: consistency, availability and partition tolerance. The parallel is clear: in both cases, a distributed system is forced to trade one desirable property for the others.
Nevertheless, the primary goal of any present-day blockchain is to create a secure network in a large ecosystem that will be able to handle a transactional bandwidth similar to that of the internet. Before delving deeper into the characteristics of the trilemma, let’s define scalability, security and decentralization in general terms:
1) Scalability refers to the ability of a blockchain to process a large number of transactions.
2) Security refers to a blockchain’s ability to protect itself from various types of attacks and double-spending.
3) Decentralization is a property that ensures that a network is not controlled by a select few entities.
In order to process a transaction, the network must first agree on its validity. Reaching that agreement takes longer when there are many participants. We can therefore say that, with security held constant, scalability is inversely related to decentralization.
Now, suppose that two proof-of-work blockchains have the same degree of decentralization, and consider security to be tied to a blockchain's hash rate. As the hash rate rises, confirmation time decreases, so scalability improves along with security. In other words, with decentralization held constant, scalability and security are proportional.
Scalability versus Decentralization
As a result, most blockchains cannot achieve all three properties simultaneously, which forces compromises. Ethereum is the best example of the trilemma in action. Not long ago, due to the growth of decentralized finance (DeFi) services, Ethereum had to process a constantly increasing number of transactions. As a result, transaction fees rose to the point where most people found the blockchain too expensive to use.
Scalability versus Security
Ethereum’s main focus has always been decentralization and security, while scalability has been left on the back burner. To encourage miners to prioritize their transactions, users have to pay higher fees. Similarly, decentralization and security take precedence over scalability in the Bitcoin blockchain.
As it stands, the vast majority of blockchains experience severe scalability issues. Ethereum, for its part, is trying to implement solutions, such as rollups combined with sharding mechanisms, to eradicate the problem that has hindered its further adoption. Currently, its global community of contributors is feverishly working to find the right balance among the available options in order to solve the trilemma.
Where does Everscale currently find itself?
Everscale’s consensus mechanism consists of two kinds of Byzantine Fault Tolerance (BFT) validation sessions: one master session and one session per shard. Shard sessions collate new candidate blocks in each shard, validate them, and sign new heights of the shard chains. The master session, in turn, performs only basic checks of blocks received from the shard chains, without rechecking the blocks themselves. This is how the network achieves horizontal scaling: as data loads grow, the data is partitioned and distributed among shards, while master validators coordinate the shards among themselves.
The scheme below depicts how both sessions work.
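The two-tier flow can be sketched in a few lines of Python. This is an illustrative model only: the class names, the dictionary block format, and the reduction of signatures to a simple list are all assumptions, and the real protocol involves networking, cryptographic signatures and fork handling that are omitted here.

```python
# Illustrative sketch of Everscale's two-tier validation flow.
# All names and data shapes here are hypothetical simplifications.

class ShardSession:
    """BFT session for a single shard: collates and signs candidate blocks."""
    def __init__(self, shard_id, validators):
        self.shard_id = shard_id
        self.validators = validators

    def produce_block(self, transactions):
        block = {"shard": self.shard_id, "txs": list(transactions)}
        # More than 2/3 of shard validators must sign the candidate block.
        quorum = (2 * len(self.validators)) // 3 + 1
        block["signatures"] = self.validators[:quorum]
        return block

class MasterSession:
    """Master session: checks shard signatures without re-executing blocks."""
    def __init__(self, shard_session_size):
        self.quorum = (2 * shard_session_size) // 3 + 1

    def accept(self, block):
        # Only a basic check: a shard quorum of signatures is present.
        return len(block["signatures"]) >= self.quorum

shard = ShardSession("0:1", validators=[f"v{i}" for i in range(11)])
master = MasterSession(shard_session_size=11)
block = shard.produce_block(["tx1", "tx2"])
print(master.accept(block))  # prints True: the master trusts the shard quorum
```

Note that with eleven validators per session the quorum works out to eight signatures, which is exactly the figure the vulnerability analysis below relies on.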
The scheme described above, despite its high efficiency, carries a small vulnerability: it presupposes that master session validators trust the decisions of shard session validators by default. When configuring the network, maximum parallelization of processing is normally preferred. The number of validators per shard session is therefore minimized in order to increase the number of validator sets working in shards, and correspondingly the number of shards.
In the current default configuration, each shard session contains eleven validators. Increasing the number of validators per shard requires either increasing the total number of validators or having each validator process more shards. The latter raises the load on each individual validator and, with it, the average validation time.
TIP
On Everscale, one validator can process several sessions in parallel.
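The trade-off described above can be expressed with simple arithmetic. The function and parameter names below are illustrative, not part of the protocol:

```python
from math import ceil

def validators_required(shards, session_size, sessions_per_validator=1):
    """Distinct validators needed to staff every shard session, given
    that one validator may serve several sessions in parallel."""
    return ceil(shards * session_size / sessions_per_validator)

# 16 shards with 11 validators each and no session sharing:
print(validators_required(16, 11))     # 176 distinct validators
# Doubling the session size doubles the requirement...
print(validators_required(16, 22))     # 352
# ...unless each validator takes on two sessions, doubling its load:
print(validators_required(16, 22, 2))  # 176
```

This is why hardening a shard session by enlarging it is not free: the cost reappears either as a larger validator set or as a heavier per-validator load.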
Since more than ⅔ of a shard session’s validators are sufficient for validation, and a session contains eleven validators by default, eight malicious validators are enough to push a malicious block to the masterchain. The graph depicts a hypothetical example calculating the probability of a malicious block being sent to the masterchain, based on the default network parameters: 300 validators in total, 30 of which are malicious. Under these circumstances, there is a 50% probability that a malicious block will be added to the chain within the first eight months of network operation, and a 90% probability within the first two years.
Moreover, the deterministic selection of validator sets allows attackers to predict exactly when a successful assault becomes possible: namely, when they control ⅔ of a shard’s validators.
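The per-session term behind such estimates is a hypergeometric tail: the chance that a randomly drawn session of eleven validators contains at least eight malicious ones. The sketch below computes only that term; turning it into the quoted 8-month and 2-year figures additionally requires the rate at which sessions are re-formed, which the text does not specify.

```python
from math import comb

def p_captured_session(total=300, malicious=30, session_size=11, quorum=8):
    """Probability that a randomly selected shard session contains at
    least `quorum` malicious validators (hypergeometric tail).
    Defaults mirror the network parameters quoted above."""
    return sum(
        comb(malicious, k) * comb(total - malicious, session_size - k)
        for k in range(quorum, session_size + 1)
    ) / comb(total, session_size)

# A single session is very unlikely to be captured, but sessions are
# re-formed constantly, so the cumulative risk grows over months.
print(f"{p_captured_session():.2e}")
```

Raising the quorum, or shrinking the malicious share, drives this probability down sharply, which is the lever the SMFT changes below are designed to pull.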
To eradicate this issue, the following is required:
- providing an additional layer of security for validating shard blocks
- making the selection of additional validators unpredictable
- enabling the possibility for retroactive validation verification
- setting up a mechanism to slash validators that do not perform additional validations
- avoiding any increase in block release latency, except for malicious blocks
SMFT — solving the trilemma
Soft Majority Fault Tolerance (SMFT) is an original Everscale consensus protocol designed to raise blockchain security to the level where any attempted attack becomes meaningless, while keeping the blockchain fast and sufficiently decentralized. The mechanism, currently in development, adds random rechecking of candidate blocks by workchain validators; such validators are called verifiers. Whether a given validator must verify a block is determined from a deterministic BLS signature produced with the validator’s private key over the hash of the candidate block. If block validation issues arise, the verifier notifies all masterchain validators that the corresponding shard block contains an error and must be rechecked before being included in the masterchain.
The scheme depicts how it will work.
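The self-selection step can be sketched as follows. This is an assumption-laden model: the real protocol derives the decision from a deterministic BLS signature, for which SHA-256 over (secret key || block hash) stands in here, and the 1-in-16 selection ratio is an arbitrary example value, not a protocol constant.

```python
import hashlib

SELECTION_DENOMINATOR = 16  # hypothetical: roughly 1 in 16 validators verify

def is_verifier(secret_key: bytes, block_hash: bytes) -> bool:
    """Deterministically decide whether this validator must recheck the
    candidate block. Each validator can compute its own outcome, but
    outsiders cannot predict it without knowing the secret key.
    (SHA-256 stands in for the deterministic BLS signature here.)"""
    digest = hashlib.sha256(secret_key + block_hash).digest()
    return int.from_bytes(digest[:8], "big") % SELECTION_DENOMINATOR == 0

block_hash = hashlib.sha256(b"candidate-block").digest()
verifiers = [i for i in range(300)
             if is_verifier(i.to_bytes(4, "big"), block_hash)]
print(len(verifiers))  # roughly 300/16 validators self-select
```

The key property is that the outcome is deterministic per (validator, block) pair yet unpredictable to an attacker, which removes the ability to time an assault that the old deterministic set selection granted.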
The protocol introduces the following changes:
- All validators in a workchain are notified about a new block-candidate in each shard.
- An additional pair of BLS keys is introduced. BLS keys were chosen because they make it possible to aggregate signatures from several validators into a multi-signature without a significant increase in size.
- Any validator in the workchain can be selected as a verifier based on the hash of the block and the validator’s private BLS key.
- A block is considered delivered within the workchain if the number of validators that confirmed the receipt of the block is higher than some configurable value (50% by default). The proof of delivery is the BLS multi-signature of the workchain validators.
- Shard blocks not delivered within the workchain are not accepted into the masterchain until the proof of delivery is provided.
- If a masterchain validator receives a NACK message from a workchain validator, the block in question is not added to the masterchain until a broad consensus on that block is reached.
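The delivery and NACK rules in the list above can be condensed into a single predicate. The 50% threshold is the configurable default mentioned in the text; the function name is illustrative, and BLS multi-signature handling is reduced to simple counting:

```python
# Sketch of the delivery rule: a shard block enters the masterchain only
# if a majority of workchain validators confirmed receipt and no verifier
# has raised a NACK. Real BLS aggregation is out of scope here.

def block_deliverable(confirmations, workchain_size, threshold=0.5, nacks=0):
    """Return True if the shard block may be accepted into the masterchain."""
    if nacks > 0:
        return False  # a NACK forces a full recheck before acceptance
    return confirmations > threshold * workchain_size

print(block_deliverable(151, 300))           # True: majority confirmed
print(block_deliverable(150, 300))           # False: not a majority
print(block_deliverable(200, 300, nacks=1))  # False: flagged by a verifier
```

The effect is that latency is unchanged on the happy path, while a single honest verifier is enough to stall a malicious block, matching the requirements listed earlier.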
To sum up, Everscale’s SMFT consensus is an original solution to the so-called blockchain trilemma. Instead of adopting and trying to improve an existing mechanism, Everscale built a new one from scratch around the introduction of verifiers. SMFT will allow parallel processing of smart contracts without affecting the number of validators checking the correctness of collations.