Can Everscale become the next Ethereum?

Everscale is a decentralized network with a decentralized community; there is no single entity that controls all operations — we are the Everscale community! One of our developers and community members carried out an in-depth analysis of Everscale, comparing it with Ethereum and explaining why Everscale is the Ethereum 2.0 that Ethereum should have become but never will be because of its legacy.

Ethereum and its legacy

Ethereum is the first blockchain of the second generation: it brought the world smart contracts, decentralized apps, and eventually DeFi. Unfortunately, it became a hostage of its pioneering nature. It was revolutionary when it emerged, but it has since settled into a path of slow, cautious evolution, constrained by its own legacy.

With Everscale, the complete opposite is true. Due to its late start, Everscale was able to assess what mistakes had been made in Ethereum and apply the right solutions in the architecture. At the same time, unlike many other blockchains, Everscale did not cut any corners in its design and did not try to make another cheap version of Ethereum.

In this article, we will break down some of the problems of Ethereum and how they have been solved in Everscale.

Sharding

In the very beginning, the goal for Ethereum 2.0 was to build real sharding with cross-shard transactions and hundreds of shards. Then, Ethereum's researchers and developers decreased the number of shards to 32, and now they have decided to abandon real sharding (at least in the near future) and are considering using rollups instead.

Rollups are a very controversial solution, not only because of their centralization and low security (a topic for a separate article), but also because they are not user-friendly. In a nutshell, they are just different networks that get some security guarantees from an L1 blockchain.

It's very difficult to implement real sharding on Ethereum because of its initially synchronous architecture. Synchronous cross-shard transactions are hard to implement, and it's impossible to do with high bandwidth. And if each shard exists independently, this isn't much better than rollups.

Meanwhile, Everscale was built with an asynchronous architecture from the start.

All smart contracts communicate with each other by sending messages, and all that needs to be done in order to make sharding work is to synchronize message queues between shards.
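The queue synchronization described above can be sketched in Python. This is a simplified model, not Everscale's actual implementation; all names here are made up for illustration:

```python
from collections import deque

# Toy model: contracts on different shards communicate only via messages,
# so sharding reduces to synchronizing message queues between shards.

class Shard:
    def __init__(self, shard_id):
        self.shard_id = shard_id
        self.inbound = deque()   # messages waiting to be processed here
        self.outbound = deque()  # messages produced here for other shards

    def send(self, dest_shard, payload):
        self.outbound.append((dest_shard, payload))

def sync_queues(shards):
    """Move every outbound message into the inbound queue of its destination."""
    for shard in shards:
        while shard.outbound:
            dest, payload = shard.outbound.popleft()
            shards[dest].inbound.append((shard.shard_id, payload))

shards = [Shard(0), Shard(1)]
shards[0].send(1, "transfer 10 EVER")
sync_queues(shards)
# shards[1].inbound now holds the message that originated on shard 0
```

The point of the model is that no shard ever needs to execute another shard's transactions; it only needs the messages addressed to it.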

At the same time, as Everscale was built by maximalists, it goes even further. They didn’t just come up with sharding — they came up with infinite sharding.

Sharding — How it works in a nutshell

At the top level there are two workchains: a master workchain (-1) for synchronization and governance, and a main workchain (0) for smart contracts.

The main workchain for smart contracts is divided into N shards, with a minimum of 16. Each shard has its own set of validators who are responsible for the execution of transactions on their shard, and at the same time they download all blocks from other shards on their workchain.

A block in Everscale is not just a list of transactions that need to be executed in order to roll out state changes; a block is:

  • A list of messages for transactions that have been executed, which have been removed from the incoming queue.
  • New messages that joined the outgoing queue after message processing.
  • Changes in smart contracts states that have been obtained after message processing.
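The three parts of a block listed above can be modeled in a few lines of Python. This is a toy sketch under the simplifying assumptions above; the names are illustrative, not Everscale's actual data structures:

```python
from dataclasses import dataclass

# Toy model of an Everscale block: not just a transaction list, but the
# queue changes and state changes that resulted from executing it.
@dataclass
class Block:
    processed_messages: list     # messages removed from the incoming queue
    new_outgoing_messages: list  # messages appended to the outgoing queue
    state_changes: dict          # contract address -> new contract state

def apply_block(state, out_queue, block):
    # A validator on another shard rolls out the changes
    # without re-executing the transactions.
    state.update(block.state_changes)
    out_queue.extend(block.new_outgoing_messages)
    return state, out_queue

block = Block(["msg-in"], ["msg-out"], {"0:ab": {"balance": 5}})
state, queue = apply_block({}, [], block)
```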

For example, suppose a validator from shard X needs to maintain the current state of shard Y. It doesn't need to execute all the transactions in the shard Y block; it just downloads the block and rolls out the changes to the message queue and the smart contract states.

This means that sharding in Everscale is not data sharding, but sharding of computation resources.

Also, if a shard has had too many transactions over the last N blocks, it simply splits into two along the address ranges of the smart contracts: some addresses go to one shard, others to the other, and each of the resulting shards can split again.
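The split rule can be sketched as follows. This is a hypothetical illustration: real Everscale splits a 256-bit account address space, while the numbers here are toy values:

```python
# Toy model: an overloaded shard's address range is cut in half,
# and each half becomes its own shard.

def split_shard(lo, hi):
    """Split the address range [lo, hi) into two child shard ranges."""
    mid = (lo + hi) // 2
    return (lo, mid), (mid, hi)

def shard_for(address, shard_ranges):
    """Find which shard's range a contract address falls into."""
    return next(i for i, (lo, hi) in enumerate(shard_ranges) if lo <= address < hi)

left, right = split_shard(0x00, 0x100)
# addresses 0x00..0x7F route to the left child, 0x80..0xFF to the right
```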

To make this work, Everscale also abandoned the idea of radical decentralization. The number of validators in the network will only ever be measured in thousands, not hundreds of thousands as in Ethereum. These are professional validators with powerful, expensive servers: the current requirements are 48 CPU cores, 128 GB of RAM, a 1 TB SSD, and a 1 Gbit/s network connection.

With this kind of sharding, Everscale gets huge network bandwidth without degrading the experience of users, as they don’t need to switch shards themselves or constantly transfer their assets from one rollup to another like on Ethereum.

There's also a big security issue here: with more shards, there are naturally fewer eyes watching each one. Combined with high block production rates, this could allow colluding validators on a single shard to create an outgoing message carrying assets that never existed in the sending contract.

Concept of the new Everscale consensus

The main idea is that since Everscale's validators shard only computation, and everyone always has the data for all shards, any validator can easily and independently validate any released block.

Here we’ll just describe the general principles to make it easier to understand, but not the exact algorithm itself:

  • Each validator comes up with a random number and sends its hash to the rest of the validators.
  • After all shards have created a block, but before other shards accept it, all validators must take the hash of that block and mix it with a random number. If the remainder from dividing the resulting number by N is zero, then the validator must check that block and send “Yes” or “No” to all validators of the master workchain.
  • If there was at least one “No,” then the rest of the validators start checking that block. If the block is wrong, not only the validators who created the block will have their stakes slashed, but also those who said “Yes” or were silent.
  • Before creating the next block, everyone reveals the number they committed to and commits to a new one.
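The steps above can be sketched as a commit-reveal lottery. This is a simplified Python model; the value of N and the hashing details are illustrative assumptions, not the exact algorithm:

```python
import hashlib

# Simplified model of the block-checking lottery described above.
N = 4  # on average, one in N validators must check each block (illustrative)

def commit(secret: int) -> str:
    # Publish only the hash of the secret random number in advance.
    return hashlib.sha256(str(secret).encode()).hexdigest()

def must_check(block_hash: bytes, secret: int) -> bool:
    # Mix the block hash with the secret; the validator must check
    # the block if the result is divisible by N.
    digest = hashlib.sha256(block_hash + str(secret).encode()).digest()
    return int.from_bytes(digest, "big") % N == 0

commitment = commit(12345)      # published before blocks are produced
duty = must_check(b"block-xyz", 12345)  # known only to this validator
```

Because the secret is committed before the block exists and revealed only afterwards, block producers cannot predict which validators will be obliged to check them.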

Thus, shard validators never know which other validators will check their block, or how many of them. It’s a very nice and elegant solution.

An ever-increasing data tail

The original idea of blockchain was to have a chain of blocks from zero to the current state. What’s more, we can always sync from genesis to the current state, and check that the state we have is right.

For some time now, Ethereum full nodes have been syncing from a snapshot taken in the not-too-distant past rather than from the genesis block.

Many Ethereum maximalists are still not ready to accept the fact that the idea of storing the entire block history from scratch was wrong. They believe that the block history will be stored forever, and come up with special protocols to make it always possible to check a particular part of the blockchain.

Nevertheless, we can say that even Ethereum has abandoned this idea. In the Ethereum 2.0 roadmap, there is a paragraph entitled “History expiry,” which states that full nodes should now officially not store block history for more than a year.

Block history is also crucial for rollups. If a rollup operator disappears, you need its entire block history to release your assets on the L1, which is one of the reasons why rollups are a controversial technology. With Ethereum 2.0, we can only speculate about whether that history will be stored anywhere. Blockchain explorers are supposed to store it, and some new systems are being considered, but so far none exist.

It's also understood that of the two properties, high throughput and history retrievability, we can choose only one. (source)

There is also such a thing as event logs. They were created to simplify writing Web3 applications, but they also mean that full nodes and blockchain providers (such as Infura) must store an infinitely growing amount of data, and queries against it are very slow.

It is the same legacy problem: many applications already depend on event logs, and it would be hard for providers like Infura to stop supporting them.

But this is only half of the problem. There is also the issue of blockchain state sprawl. If someone writes something into the blockchain at least once, for example buying 0.001 units of some memecoin, even if that memecoin is worth nothing in two years, the validators will be obliged to keep information about the purchase forever.

That is, you pay for the record once, but it will be stored forever. That’s where the interesting economics come in — blockchains are forced to limit the rate of recording data artificially, so that the speed at which the blockchain state grows does not exceed the rate at which data storage gets cheaper. As a result, users are forced to compete with each other for the right to write data on the blockchain, and transaction fees keep rising.

This problem has also been recognized in Ethereum, and the Ethereum 2.0 roadmap contains a “State expiry” point. But of course, they can’t completely solve this problem without breaking backward compatibility. So far, there is a proposal to remove contracts that have not been accessed for N years (10 years, for example) from the state, with the possibility of recovering them.

Some other blockchains also try to solve this problem. For example, on the Near blockchain a contract has to lock N tokens every time it writes new information into the state, and a smart contract developer can implement a mechanism that lets users delete their data and get the tokens back. This is, of course, a palliative: not all smart contract developers will do that, and even those who do rarely provide for the record's value changing in the future. Other blockchains simply delete smart contracts holding fewer than N tokens from the state.

In Everscale, the developers don't look for easy ways out or cut corners, so they solved this problem as correctly as possible. Each contract pays rent for storing its data in the state, and that rent scales linearly with data size. When a smart contract runs out of EVER tokens, it is first deactivated, with the option to restore it, and later deleted completely.
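A back-of-the-envelope sketch of linear storage rent. The rate constant below is invented purely for illustration; real Everscale storage prices differ:

```python
# Toy model of linear storage rent: cost grows with both data size
# and storage time. The rate is an illustrative assumption.
RENT_PER_BYTE_PER_SECOND = 1e-9  # made-up rate, in EVER

def storage_fee(data_bytes: int, seconds: int) -> float:
    # Rent is linear in data size and in time.
    return data_bytes * seconds * RENT_PER_BYTE_PER_SECOND

def lifetime_seconds(balance: float, data_bytes: int) -> float:
    # How long the contract's balance can pay rent before the
    # contract is deactivated and eventually deleted.
    return balance / (data_bytes * RENT_PER_BYTE_PER_SECOND)
```

Under this model, a contract decides its own lifetime simply by how many tokens it keeps on its balance.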

Thanks to this, Everscale has completely controllable behavior and each smart contract decides how long it will live. Users don’t have to compete with each other for the opportunity to record their data and Everscale gets enormous throughput in terms of the number of transactions.

Distributed programming

There is a reasonable question: if we have a contract for some token, for example USDT, then holders of large balances are more motivated to pay for its storage, while holders of small balances are less so. Won't we end up in a situation where some users are effectively paying to store other users' data just to keep the shared contract from being deleted?

To solve this problem, another ingenious idea was invented, called distributed programming.

In Everscale, it is customary to deploy a separate small smart contract for every entity: every account balance, every trading pair, and so on. The owner of that contract decides how long they want to keep the data and pays only for themselves.
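The idea can be illustrated with a toy Python model: instead of one contract holding every balance, each holder gets a tiny wallet contract of their own. The class names here are illustrative, not Everscale's actual API:

```python
# Toy model of distributed programming: one small wallet contract per
# holder, so each user pays storage rent only for their own record.
class TokenWallet:
    def __init__(self, owner: str):
        self.owner = owner
        self.balance = 0  # this holder's balance lives in their own contract

class TokenRoot:
    # The root contract only deploys wallets; there is no shared
    # balance table whose storage everyone would have to fund.
    def __init__(self):
        self.wallets = {}

    def wallet_of(self, owner: str) -> TokenWallet:
        if owner not in self.wallets:
            self.wallets[owner] = TokenWallet(owner)
        return self.wallets[owner]

root = TokenRoot()
root.wallet_of("alice").balance += 100
```

If "alice" stops paying rent, only her wallet contract disappears; nobody else's balance is affected.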

How this works is quite a complicated topic. You can read about it in another of our articles: https://mnill.github.io/everscale-for-solidity-dev/distributed_programming
