Everpoint 2024: Highlights from the Core Developers' Panel Discussion

Welcome to our detailed discussion on the latest developments and future plans for the Everscale node. Joining us today are key members from our team:

- Aleksandr Alekseev: Technical Coordinator, Broxus

- Vyacheslav Belenko: Lead Developer, EverX node

- Kirill Mikheev: Lead Developer, Broxus

- Stepan Kamentsev: CTO, Broxus

We will delve into our achievements over the past year, the main challenges we faced, and our plans moving forward.

Handling Queue Overload from a Smart Contract Attack

Vyacheslav Belenko:

A year ago our network experienced an attack, which might have been the result of testing rather than intentional harm. The network was briefly down, and we had to urgently repair it. This attack was made possible due to certain characteristics of the network architecture.

The attack was based on a smart contract that responded to each incoming message by sending several messages back to itself. This led to a constant circulation of a large volume of messages in the network, increasing its load. Furthermore, under those conditions the work of assembling blocks was not paid for in gas, so the load was essentially cost-free for its creators, opening an avenue for potential attacks.
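
As a back-of-the-envelope illustration (with assumed fan-out and throughput numbers, not measurements from the incident), the sketch below shows how a self-replying contract makes the queue outgrow any fixed per-block processing capacity:

```rust
// Hypothetical numbers, not data from the incident: a contract that answers every
// incoming message with several messages to itself makes the queue outgrow any
// fixed per-block processing capacity.
fn main() {
    let replies_per_message: u64 = 3;      // assumed fan-out of the malicious contract
    let processed_per_block: u64 = 10_000; // assumed shard throughput per block

    let mut queue: u64 = 1; // a single external message starts the loop
    for block in 1..=10 {
        let processed = queue.min(processed_per_block);
        // every processed message is replaced by several new self-addressed ones
        queue = queue - processed + processed * replies_per_message;
        println!("block {block:>2}: queue length = {queue}");
    }
    // Before the shard saturates, the queue triples every block; after saturation it
    // keeps growing by a fixed amount per block while collation work stays capped,
    // and none of this load is paid for in gas.
}
```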

We spent a considerable amount of time looking for ways to resolve the attack, experimenting with various approaches. The main challenge was managing a huge message queue, which was difficult to process and clean. Additionally, the sharded structure of our network, which requires splitting shards under heavy load, added complexity to the block assembly process. As a result, many of the tasks from our previous year's roadmap were postponed while these more urgent issues were addressed.

Kirill Zavarovsky:

Vyacheslav mentioned that there were complications during the shard splitting process, related not only to technical aspects but also to the blockchain protocol itself. 

During the split, it's necessary to divide the message queue within the shard, which requires going through all the messages to distribute them into different groups. This is a lengthy process, particularly with a large number of messages, and the protocol initially did not provide an efficient mechanism for such a division. Consequently, the collator had to incorporate this heavy operation into block creation.

When a contract sends messages to itself, the queue grows exponentially, and no matter how efficient the splitting algorithm is, it can't fit into a single block. So initially we implemented some quick fixes based on pre-calculating part of the split, which helped. Later, after careful consideration, we decided to make changes to the protocol.

Now, during a split, queues aren't divided within a single block; instead, two shards receive the same queue. This queue contains both internal and external messages. During collation, the shard clears the queue by excluding delivered messages. The queue-splitting operation can now span multiple blocks, allowing the collator to take as much time as needed and defer the remaining work to the next block. This change fundamentally solved the attack issue.
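
A schematic of the new behaviour, using invented types rather than the node's real data structures: both child shards keep the full parent queue and prune it lazily, a bounded amount per block.

```rust
// Simplified sketch of the protocol change: on a split both children inherit the
// same queue and prune it over several blocks instead of partitioning it in one.
#[derive(Clone)]
struct EnqueuedMessage {
    dst_account: u8, // pretend account prefix; the real key is an address prefix
    delivered: bool,
}

#[derive(Clone)]
struct ShardQueue {
    prefix_bit: u8, // 0 or 1: which half of the address space this shard owns
    messages: Vec<EnqueuedMessage>,
}

impl ShardQueue {
    /// Clean at most `budget` entries per block, deferring the rest to later blocks.
    fn collate_step(&mut self, budget: usize) {
        let mut scanned = 0;
        self.messages.retain(|m| {
            if scanned >= budget {
                return true; // out of budget: keep untouched, revisit in the next block
            }
            scanned += 1;
            // drop messages that were already delivered or belong to the sibling shard
            !(m.delivered || m.dst_account & 1 != self.prefix_bit)
        });
    }
}

fn main() {
    let shared: Vec<EnqueuedMessage> = (0..20)
        .map(|i| EnqueuedMessage { dst_account: i % 2, delivered: i % 5 == 0 })
        .collect();

    // Both children start with an identical copy of the parent's queue.
    let mut left = ShardQueue { prefix_bit: 0, messages: shared.clone() };
    let mut right = ShardQueue { prefix_bit: 1, messages: shared };

    for block in 0..3 {
        left.collate_step(8);
        right.collate_step(8);
        println!("after block {block}: left={}, right={}", left.messages.len(), right.messages.len());
    }
}
```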

Node Optimization: Lessons from the Venom Testnet

Kirill Zavarovsky:

Now we will talk about the Node improvements over the past year, especially related to the launch of the Venom network. 

The load in the Venom testnet was significantly different from Everscale, which exposed and helped to resolve many issues.

Aleksandr Alekseev:

Under the high load, we encountered two main bottlenecks with our node. 

The first is working with the database and disk. For the node to work quickly and efficiently, we need effective caching and synchronization of data between disk and main storage. We made countless optimizations to what data we load, what we cache, and how we cache it.

Kirill Zavarovsky:

We also had a bottleneck with persistent state. Persistent state is the blockchain state used when introducing new nodes to the network. 

When a node joins the network, it needs to obtain the current state to synchronize. This state must be formatted in a way that can be transmitted over the network. 

When the state is stored as a tree of cells, it is efficient for operations but not for compiling into a single file for transmission. With large states, this becomes a problem as it requires traversing the entire database, organizing the data, and using specific algorithms. 

The old algorithm was universal but slow, especially with large data volumes, and did not account for the specifics of RocksDB. We redesigned the state saving and reading algorithms, adapting them to RocksDB, which significantly improved speed.
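
As a rough illustration of the direction of that rewrite (with a BTreeMap standing in for RocksDB and fabricated cell contents), the idea is to emit the persistent state by scanning storage in its own sorted key order rather than by random tree traversal:

```rust
use std::collections::BTreeMap;
use std::io::Write;

// Streaming the state file from a sequential scan of sorted keys, the access
// pattern RocksDB iterators handle well, instead of random lookups per cell.
// Cell hashes and payloads here are fabricated for the example.
fn main() -> std::io::Result<()> {
    // key = cell hash (sorted), value = serialized cell bytes
    let mut cell_db: BTreeMap<[u8; 4], Vec<u8>> = BTreeMap::new();
    for i in 0u32..5 {
        cell_db.insert(i.to_be_bytes(), vec![i as u8; 8]);
    }

    let mut out: Vec<u8> = Vec::new(); // stands in for the persistent-state file
    out.write_all(&(cell_db.len() as u32).to_be_bytes())?; // simple header: cell count

    // Sequential scan in key order.
    for (hash, body) in &cell_db {
        out.write_all(hash)?;
        out.write_all(&(body.len() as u32).to_be_bytes())?;
        out.write_all(body)?;
    }
    println!("serialized {} cells into {} bytes", cell_db.len(), out.len());
    Ok(())
}
```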

Aleksandr Alekseev:

In the live network, this issue was evident when new validators and nodes tried to join; they couldn't download the state before it updated. Validating nodes spent a lot of resources and time creating this state. 

We introduced optimizations that allowed the task to be divided: not every node needs to compile the state. Nodes synchronized with the network, not just validating nodes, can handle this task. These nodes can be sources of the persistent state because security is guaranteed — the node downloading the state still verifies its integrity and signatures.

Aleksandr Alekseev:

In fact, besides the blockchain infrastructure, which involves the operation and validation of nodes, we have a large infrastructure related to how users interact with the blockchain. This includes explorers, endpoints for blockchain interaction, and various transaction indexers.

Stepan Kamentsev:

Launching the Venom testnet brought us many challenges. To put it in perspective: when we launched the Venom testnet, the traffic on the RPC (the software that wallets connect to for blockchain interaction) was about 50 terabytes per day. 

We were receiving around a billion requests per day, which is about 10-20 thousand requests per second. In Everscale, we never had such traffic volumes. 

So, we had to rewrite everything from scratch. One of the technologies added to the RPC to reduce traffic was support for Protocol Buffers. This allowed us to significantly optimize the traffic transmitted from the RPC server to the client.
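
For illustration only, a response type could be declared with the `prost` crate roughly as below, so it is sent as compact binary instead of JSON. The message layout, field names, and the `prost` dependency are assumptions for this sketch, not the actual RPC schema.

```rust
// A sketch of a Protocol Buffers response type, assuming `prost` is a dependency.
use prost::Message;

#[derive(Clone, PartialEq, Message)]
pub struct AccountStateResponse {
    /// Serialized account state (bag of cells).
    #[prost(bytes = "vec", tag = "1")]
    pub boc: Vec<u8>,
    /// Logical time of the last transaction.
    #[prost(uint64, tag = "2")]
    pub last_transaction_lt: u64,
}

fn main() {
    let resp = AccountStateResponse { boc: vec![0u8; 256], last_transaction_lt: 123_456 };
    let wire = resp.encode_to_vec(); // compact binary payload sent to the client
    println!("protobuf payload: {} bytes", wire.len());
    let decoded = AccountStateResponse::decode(wire.as_slice()).expect("round-trip");
    assert_eq!(decoded, resp);
}
```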

We maintained this traffic for six months while the testnet was running, achieving stable software performance. Regarding the Explorer and indexers, initially, the Explorer could lag behind the network by tens of minutes, which was unacceptable for users. This happened because all indexers also function as nodes, like validators that produce blocks.

One major issue, as mentioned by colleagues, was related to state management and garbage collection. Each block generates data in the state that needs to be deleted after processing. The garbage collector required more time than issuing the block, causing nodes to spend most of their time on garbage collection instead of generating new data.

Creating an efficient garbage collector that doesn't impact the system's primary calculations was a complex task. We spent considerable time ensuring the garbage collector worked effectively, which reduced hardware requirements for nodes.
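
A minimal sketch of the general pattern (hypothetical types; the node's real garbage collector is far more involved): block production only hands work to a background worker and never waits for deletions.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// The collator only *enqueues* garbage (per-block data that became unreachable)
// and a background worker deletes it, so block production never waits for the GC.
fn main() {
    let (gc_tx, gc_rx) = mpsc::channel::<u32>(); // seq_no of a block whose leftovers can go

    let gc_worker = thread::spawn(move || {
        while let Ok(seq_no) = gc_rx.recv() {
            // Stand-in for deleting the cells written for block `seq_no`,
            // done off the hot path of collation.
            thread::sleep(Duration::from_millis(5));
            println!("gc: cleaned data of block {seq_no}");
        }
    });

    for seq_no in 0..10_u32 {
        println!("collator: produced block {seq_no}"); // block production is not blocked
        gc_tx.send(seq_no).unwrap();
    }
    drop(gc_tx); // close the channel so the worker can finish
    gc_worker.join().unwrap();
}
```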

There were also challenges with the Explorer and data indexing. In Everscale, data archives were around 1-2 megabytes, downloaded by nodes during synchronization. In Venom, archives grew to 200-300 megabytes. Many software components weren't designed to handle such large data volumes, necessitating further adjustments to achieve stable operation.

Before optimizations, the network's user action response time was around 100 seconds, sometimes even reaching 1000 seconds in rare cases. This was unacceptable for users. 

Through various optimizations, we reduced the finality time in the testnet and mainnet to 3 seconds, even under a load of 1000-2000 TPS on 8 shards. In the mainnet, we observed stable operation under a load of 300-500 TPS, showing the infrastructure's readiness for large data volumes.

There were also some interesting problems, such as developers designing contracts with large mappings that are incompatible with our blockchain's architecture. Some contracts grew to 50 MB, overwhelming the RPC and causing significant issues. This was further exacerbated by front ends that continually re-requested the state of these large mappings, forcing the RPC to serve unnecessary data, which broke many things.

At one point, our 10-gigabit channels were maxed out, so we implemented a solution: the RPC now doesn't return contract states larger than 500 KB. This is a workaround, but similar protocols, like TON, have patches to limit contract state sizes at the protocol level, which we might adopt. This can be configured.

Kirill Zavarovsky: 

Technically, it's a reasonable solution.
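
As a toy illustration of that guard (the 500 KB default comes from the discussion above; the config type and error message are invented), the check on the RPC side could look roughly like this:

```rust
// Sketch of the workaround: the RPC refuses to return account states above a
// configurable threshold instead of saturating the outbound channels.
const DEFAULT_MAX_STATE_BYTES: usize = 500 * 1024;

struct RpcConfig {
    max_state_bytes: usize,
}

impl Default for RpcConfig {
    fn default() -> Self {
        Self { max_state_bytes: DEFAULT_MAX_STATE_BYTES }
    }
}

fn get_account_state(state_boc: &[u8], cfg: &RpcConfig) -> Result<Vec<u8>, String> {
    if state_boc.len() > cfg.max_state_bytes {
        return Err(format!(
            "account state is {} bytes, exceeds the {} byte limit; run your own RPC to fetch it",
            state_boc.len(),
            cfg.max_state_bytes
        ));
    }
    Ok(state_boc.to_vec())
}

fn main() {
    let cfg = RpcConfig::default();
    let small = vec![0u8; 10 * 1024];
    let huge = vec![0u8; 50 * 1024 * 1024]; // the 50 MB contracts mentioned above
    assert!(get_account_state(&small, &cfg).is_ok());
    assert!(get_account_state(&huge, &cfg).is_err());
    println!("oversized states are rejected, small ones are served");
}
```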

Stepan Kamentsev:

Unlike classic virtual machines like Ethereum, where nodes handle contract requests without transmitting contract states to clients, our network downloads the entire contract state to the user's local machine. Our design allows users to locally verify transactions against a valid state, checking validator signatures and Merkle tree proofs. 
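
To make that verification idea concrete, here is a toy sketch: it recomputes a Merkle root from a proof branch and accepts the data only if enough validator signatures cover that root. Real cells use 256-bit hashes and Ed25519 signatures; `DefaultHasher` and the signature counters below are stand-ins.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn node_hash(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

fn leaf_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

/// Recompute the root from a leaf and its sibling path (bool = sibling is on the right).
fn verify_branch(leaf: &[u8], path: &[(u64, bool)], expected_root: u64) -> bool {
    let mut acc = leaf_hash(leaf);
    for (sibling, sibling_on_right) in path {
        acc = if *sibling_on_right { node_hash(acc, *sibling) } else { node_hash(*sibling, acc) };
    }
    acc == expected_root
}

fn main() {
    // Build a two-leaf "state": root = H(H(a), H(b))
    let (a, b) = (b"account A".as_slice(), b"account B".as_slice());
    let root = node_hash(leaf_hash(a), leaf_hash(b));

    // Proof for leaf `a`: one sibling (hash of b) on the right.
    let proof = vec![(leaf_hash(b), true)];
    let signatures_ok = 3;     // pretend 3 of 4 validators signed this root
    let total_validators = 4;

    let enough_signatures = signatures_ok * 3 > total_validators * 2; // > 2/3 rule
    let state_matches = verify_branch(a, &proof, root);
    println!("accept data: {}", enough_signatures && state_matches);
}
```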

In other blockchains, state remains on remote nodes, and complex ZK-math for local verification would be impractical. For instance, an ERC-20 USDT contract with a billion token holders would require downloading gigabytes of data for each transaction, which is infeasible. Therefore, users typically trust the system as is, but this could expose them to invalid state data without their knowledge, potentially leading to attacks. 

Our concept is solid, but we need to consider some specifics, like not putting too much data into a single contract. It would be beneficial to enforce such limitations at the protocol level to prevent developers from making these mistakes.

There have been cases where developers didn't anticipate large user volumes, leading to unexpected problems. Now, developers quickly realize if their contract design is flawed because the RPC stops returning data if the contract state is too large. 

If they still want to proceed, they can run their own local RPC and bear the traffic costs themselves. Traffic isn't free; it can be ten times more expensive than memory or CPU resources. For blockchain, large contract states aren't ideal. 

Kirill Zavarovsky:  

It's not just the volume of the contract state that matters but also the number of cells. It's easier for the infrastructure to handle one large cell than 100 smaller ones of the same total size. We’re working on a concept called "big cells," allowing the blockchain to store larger, logically single data blocks, like NFTs, more efficiently.

Aleksandr Alekseev:

I'll explain why we keep mentioning the Venom testnet. As developers, when designing systems, we plan for current loads and anticipate growth by 10 or 100 times. If the load increases in the mainnet, we gradually upgrade the system to handle it. However, with Venom testnet, we experienced a huge surge in load because it was a free blockchain; users didn't pay for gas, leading to a sudden and significant increase in load. 

This required us to optimize and refine many areas, changing algorithms and improving the system. This experience prepared Everscale to handle huge loads, giving us a good reserve for future growth. It allowed us to focus on the right things.

When these situations arose, we had to reprioritize our roadmap and address the immediate issues. This was an invaluable stress test for our network.

Stepan Kamentsev: It was great to have this real-life test, not synthetic ones. Real users, given funds to use the blockchain as they wished, provided us with valuable experience. This experience was essential for further Everscale development.

Aleksandr Alekseev:

After optimizing the node, we returned to our roadmap and implemented REMP in late December to early January. Vyacheslav, can you explain what REMP is?

REMP

Vyacheslav Belenko:

Yes, of course. REMP stands for Reliable External Message Protocol. Its goal is to improve the native delivery mechanism for external messages in TVM networks. 

The original idea was to deliver external messages via broadcasts. Broadcasts allow a node to relay data to randomly chosen neighboring nodes, making it difficult to track the delivery speed and coverage.

In Everscale, this method works for many types of data, but for external messages, which are crucial for user-network interaction, it causes delays. Thus, we developed REMP to ensure timely delivery and feedback.

The REMP system includes a client on the user's side and REMP servers, which are the current network validators. REMP delivers the message directly to the validator, bypassing many intermediate steps. Validators ensure consensus on message delivery and allow the traceability of message status.

A key feature of REMP is deduplication, preventing multiple processing of the same message, which could lead to double or triple spending. This simplifies the replay protection mechanism in smart contracts, reducing their code and gas costs.
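
A minimal sketch of the deduplication idea (window size, types, and hashes are invented): remember recently accepted message hashes for a bounded interval and drop repeats, so a replayed message cannot be processed twice.

```rust
use std::collections::HashMap;

// Protocol-level replay protection instead of per-contract replay checks.
struct Deduplicator {
    window: u32,             // how many block seq_nos to remember (assumed parameter)
    seen: HashMap<u64, u32>, // message hash -> seq_no when first accepted
}

impl Deduplicator {
    fn new(window: u32) -> Self {
        Self { window, seen: HashMap::new() }
    }

    /// Returns true if the message is new and may be collated.
    fn accept(&mut self, msg_hash: u64, current_seq_no: u32) -> bool {
        // forget entries that fell out of the window
        self.seen.retain(|_, first_seen| current_seq_no - *first_seen <= self.window);
        match self.seen.get(&msg_hash) {
            Some(_) => false, // duplicate: already accepted within the window
            None => {
                self.seen.insert(msg_hash, current_seq_no);
                true
            }
        }
    }
}

fn main() {
    let mut dedup = Deduplicator::new(100);
    assert!(dedup.accept(0xABCD, 10));  // first delivery is accepted
    assert!(!dedup.accept(0xABCD, 12)); // replay within the window is dropped
    assert!(dedup.accept(0xABCD, 200)); // far outside the window it is treated as new
    println!("replay protection handled by the protocol, not by contract code");
}
```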

REMP is built on the same catchain protocol as the BFT consensus, posing additional challenges due to its network and computational overhead. Nevertheless, we developed a stable solution now active in both the mainnet and devnet.

Previously, users faced issues where messages sent to the blockchain were not executed, prompting developers to resend lost messages manually.

The technology already works and can be improved in performance, reliability, and additional features. 

Since REMP is crucial, there's always room for enhancement.

SMFT Protocol

Aleksandr Alekseev: Let's move on to SMFT. Vyacheslav, please explain what it is, its purpose, and its current status.

Vyacheslav Belenko: 

SMFT is an alternative consensus algorithm. Consensus means validators in the network must agree on the validity of a block. BFT consensus requires a significant data exchange between validators through a multi-step process. While reliable, it slows down finality because shardchain changes must be reflected in the masterchain.

SMFT simplifies this process, introducing verifiers. In the original TON white paper, a similar concept called "fisherman" was mentioned. 

Verifiers are designated roles, chosen pseudo-randomly based on the block hash generated by the collator. A node can determine if it needs to verify a block and must report its status if so. Nodes that fail to report can face slashing, although this feature is still under development. A block is considered valid if 51% of verifiers give a positive verdict. 

SMFT currently operates in a "shadow" mode, running parallel verifications without affecting block validation. We've tested it on various networks, including the mainnet, achieving shard block finality in under 100 milliseconds, though we aim to improve this further.

Currently, we have the old, reliable BFT consensus running alongside SMFT. SMFT in shadow mode only collects metrics without affecting block creation. 

This is crucial for understanding real-world performance across different geographic locations, allowing us to make informed decisions based on real-life data rather than test environments with fewer machines.

Kirill Zavarovsky: 

Let me delve into the cryptography involved. 

In the current consensus, a new block is sent to a subset of validators who verify it. If two-thirds approve, the block is applied. Everyone knows who these validators are, which could tempt manipulation.

In SMFT, the block is sent to the entire validator set of the workchain, and this dissemination is cryptographically confirmed. When a validator receives the block, it performs a calculation using its secret key to determine whether it needs to verify the block. This means no one else knows which validators have to verify the block, preventing any tampering.

A question might arise: if only the validator knows they need to verify the block, why would they bother? After a certain epoch, these keys are revealed, allowing anyone to check that the validator did their job. This creates a powerful system where the collator knows that any validator could check the block and any issues will be known to everyone. 
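
A toy model of that scheme, with `DefaultHasher` standing in for real cryptography and an arbitrary one-in-four sampling rate: the selection is computed privately from the block hash and the validator's secret, and once the secret is revealed anyone can recompute it and audit the validator.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn mix(block_hash: u64, secret: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (block_hash, secret).hash(&mut h);
    h.finish()
}

/// Private decision made when the block arrives.
fn must_verify(block_hash: u64, secret: u64) -> bool {
    mix(block_hash, secret) % 4 == 0 // sample roughly a quarter of the validators
}

/// Public audit after the epoch: if the revealed secret says the validator was
/// selected, a verdict for this block must have been published.
fn audit(block_hash: u64, revealed_secret: u64, published_verdict: bool) -> bool {
    !must_verify(block_hash, revealed_secret) || published_verdict
}

fn main() {
    let block_hash = 0xDEAD_BEEF_u64;
    let my_secret = 42;

    let selected = must_verify(block_hash, my_secret);
    println!("selected to verify this block: {selected} (known only to this validator for now)");

    // Later the secret is revealed and anyone can check the validator's behaviour.
    println!("audit passed: {}", audit(block_hash, my_secret, selected));
}
```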

Additionally, if, for example, the number of validators in a shard is limited to seven, the calculations for a block made by a collator in this shard can only be verified by the other six machines. Currently, masterchain validators do not check shard blocks. So, if intruders take over a shard, they can manipulate calculations without detection, as this isn't verified at the masterchain level. 

SMFT addresses this problem by ensuring wider verification.

Fast Finality and DAG Consensus

Aleksandr Alekseev: 

We've spent a lot of time discussing our work this year, including node optimization, handling load, implementing REMP, and SMFT. 

These were planned last year or even earlier. However, the past year brought new challenges and a different understanding of user expectations. 

We began the year focused on achieving fast finality, which has a clear business objective. 

Stepan Kamentsev: 

The concept of fast finality involves finding a compromise between the number of shards and the finality of user actions. 

When a user performs actions like trading on a DEX, buying NFTs, or staking, these actions generate numerous transactions across different shards. 

Initially, inter-shard data exchange was only possible after the masterchain block was created, resulting in delays. For example, if a DEX transaction involves five contracts in different shards, the user could wait up to 30 seconds for a response.

This system works for small, simple transactions but struggles with more complex architectures brought over from other networks, like DEXs, bridges, NFTs, DeFi systems, and lending protocols. Users don't want to wait long for transaction confirmations, even if the network can handle high TPS (transactions per second). 

Realistically, businesses don't need excessively high TPS—10, 20, or 30 thousand transactions per second might be more than enough. What businesses really need is to ensure that users don't experience long waits. Users should receive responses as quickly as possible. In the initial architecture, this wasn't feasible. We uncovered these issues during the Venom launch and took a two-part approach.

We began developing and exploring other consensus mechanisms, particularly those using Directed Acyclic Graphs (DAGs). The essence of this consensus is that ordering is moved to the mempool level: when a user sends a transaction, the mempool determines when it should be executed. This shift helps achieve the business goal of fast finality.

Besides the mempool, there's another aspect: classical consensuses like BFT are two-stage processes. In the first stage, a block proposer suggests a block while the other validators do nothing, waiting for the proposal. Once proposed, the block is then validated by the other validators, which often takes longer than the block proposal itself. Validators can only start validation after the proposal.

By moving the formation of the user transaction queue or external messages to the mempool level and strictly guaranteeing their order, we gain a significant advantage. All nodes in the network can work simultaneously, removing one stage of the consensus. This means we take external messages and start their collation process on all nodes at once. 

During block creation, nodes don't communicate with each other. Since all nodes have the same behavior algorithm, virtual machine, and input data, they independently arrive at the same result. This removes the long validation stage, reducing it to exchanging hashes of the computed blocks or final states. Once a majority agrees (BFT majority or 51%), the block is considered valid.
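
Schematically (toy types, with hashing standing in for running the TVM), the step looks like this: every node executes the same ordered messages deterministically, and the network only compares the resulting hashes.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy "execution": hashing stands in for running the virtual machine over the queue.
fn apply_deterministically(start_state: u64, ordered_messages: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    start_state.hash(&mut h);
    for m in ordered_messages {
        m.hash(&mut h);
    }
    h.finish() // hash of the resulting state
}

fn main() {
    let ordered = vec![11_u64, 22, 33]; // the mempool guarantees the same order on every node
    let start_state = 7_u64;

    // Each of the four nodes computes independently; no communication during execution.
    let hashes: Vec<u64> = (0..4).map(|_| apply_deterministically(start_state, &ordered)).collect();

    // Validation shrinks to comparing hashes and waiting for a BFT majority to agree.
    let reference = hashes[0];
    let agreeing = hashes.iter().filter(|h| **h == reference).count();
    println!("block accepted: {}", agreeing * 3 > hashes.len() * 2);
}
```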

We based our consensus on Facebook's approach used in Sui and Aptos, similar to modifications in Libra. We made it compatible with our state and virtual machine. We've stripped down the rest, kept the state and VM, ensured it works deterministically, and are now moving to test this technology. 

We plan to open a testnet in about a month based on this technology. It's an early prototype, and much work is needed for it to function correctly and safely. This is one of the alternative paths we've chosen as a backup if our other method for achieving fast finality doesn't work, which colleagues will discuss later.

Kirill will explain how the DAG works, as it helps synchronize all participants and ensures everyone has the same perspective.

Kirill Mikheev:

As Stepan mentioned, the task of the new experimental mempool is for each validator node to distribute the external messages it receives to all other nodes. Every node does this, resulting in a common state and a consistent ordering of messages across nodes, so that all nodes can assemble the same blocks.

Let's say we have a short time period, a round, in which each node must issue its mempool block containing external messages as the payload. Each mempool block is sent to all nodes. When all nodes finish sending, the round ends, and a new round begins. Each new block references two-thirds plus one majority of the previous round's blocks. This forms the beginning of a tree.

Each round adds to this tree. Although each node has a different root for its tree, the number of nodes in the mempool is limited, and they each issue one block per round, constraining the tree's width. At a certain depth, all nodes converge. 

The first part of the task is distributing external messages. Next is ordering them. A pseudo-random leader is chosen every few rounds, whose subtree is then ordered. This ordering is consistent across all nodes, creating layers of subtrees.

This process contrasts with traditional blockchain concepts where blocks form a historical chain. In classic blockchains, nodes generate shard blocks and verify consistency by exchanging signatures. In the mempool, a tree structure forms before collation, eventually converging into a common structure.
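
A bare-bones model of that structure (field names and the four-node setup are invented): each node issues one mempool block per round, and every block must reference a 2f+1 quorum of the previous round.

```rust
#[derive(Debug)]
struct MempoolBlock {
    round: u64,
    author: usize,              // index of the producing node
    payload: Vec<String>,       // external messages carried by this block
    parents: Vec<(u64, usize)>, // (round, author) of referenced previous-round blocks
}

fn quorum(n: usize) -> usize {
    // 2f + 1 out of n = 3f + 1 nodes
    2 * ((n - 1) / 3) + 1
}

fn main() {
    let nodes = 4;
    let q = quorum(nodes); // 3 for 4 nodes

    // Round 1: every node issues a block with no parents (start of the DAG).
    let round1: Vec<MempoolBlock> = (0..nodes)
        .map(|author| MempoolBlock {
            round: 1,
            author,
            payload: vec![format!("ext-{author}")],
            parents: vec![],
        })
        .collect();

    // Round 2: each new block must reference at least a 2f+1 quorum of round-1 blocks.
    let round2: Vec<MempoolBlock> = (0..nodes)
        .map(|author| MempoolBlock {
            round: 2,
            author,
            payload: vec![format!("ext-{author}-r2")],
            parents: round1.iter().take(q).map(|b| (b.round, b.author)).collect(),
        })
        .collect();

    for b in &round2 {
        assert!(b.parents.len() >= q);
    }
    println!("{} round-2 blocks, each referencing {} of {} round-1 blocks", round2.len(), q, nodes);
}
```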

Stepan Kamentsev:

During development, we encountered many interesting moments. When we reviewed whitepapers like Narwhal, Bullshark, and Mysticeti, they described technologies like Reliable Broadcast, which prevents nodes from lying. In a DAG, a node must issue only one block per round for everyone; if it sends different blocks to different neighbors, its dishonesty might only be discovered after many rounds. However, when we implemented these protocols, we found that Reliable Broadcast wasn't actually used in practice, and the process worked differently.

Kirill Mikheev:

In theory, DAGs rely on Reliable Broadcast, where a node sends a message to its neighbors, who exchange echoes to ensure they received the same message, and then forward it. This echo mechanism is extensive. 

Practically, a different approach is used. Nodes already communicate each round, so we don't need an echo mechanism but rather strict verification of received data. Blocks must carry signatures from neighboring nodes confirming their delivery.

The final stage is linearization, where the leader's history gives all nodes the same block set, limited by the number of rounds. We traverse the graph from the root in a breadth-first manner and then reverse it, resulting in a historically sorted set of messages.
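
A compact sketch of that traversal (the tiny hand-built DAG below is purely illustrative): breadth-first from the anchor over its referenced history, then the visit order is reversed so every node obtains the same sequence.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Linearize the leader's causal history: BFS from the anchor block, then reverse.
fn linearize(anchor: u32, parents: &HashMap<u32, Vec<u32>>) -> Vec<u32> {
    let mut order = Vec::new();
    let mut seen: HashSet<u32> = HashSet::new();
    let mut queue = VecDeque::from([anchor]);
    seen.insert(anchor);

    while let Some(block) = queue.pop_front() {
        order.push(block);
        for &p in parents.get(&block).into_iter().flatten() {
            if seen.insert(p) {
                queue.push_back(p);
            }
        }
    }
    order.reverse(); // oldest history first, anchor last
    order
}

fn main() {
    // block id -> referenced previous-round blocks
    let parents: HashMap<u32, Vec<u32>> = HashMap::from([
        (5, vec![3, 4]),
        (4, vec![1, 2]),
        (3, vec![1, 2]),
        (2, vec![]),
        (1, vec![]),
    ]);
    println!("delivery order from anchor 5: {:?}", linearize(5, &parents));
}
```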

Stepan Kamentsev:

Another point to mention: the DAG concept revolves around an algorithm that forms a running graph containing all externals. 

There's an algorithm that linearizes this non-linear graph into a sequence, so it can be processed as a queue and applied to the state in the virtual machine. Due to mathematical algorithms and fixes, this process happens identically for all participants. We form the graph together in parallel, but once it's formed, we stop communicating and simply apply user transactions to the state.

We had to address several issues to achieve deterministic block production and maximum speed. We replaced shards with streams, configuring the number of streams at the protocol level. This ensures that regardless of server power, the incoming queue is processed uniformly and in parallel, applying it to the state. 
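
A rough sketch of that routing (the stream count, types, and hash choice are illustrative): the number of streams is a protocol constant, each message is assigned to a stream by its destination account, and streams run in parallel.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::thread;

const STREAMS: usize = 4; // fixed at the protocol level, not per server

fn stream_for(account: &str) -> usize {
    let mut h = DefaultHasher::new();
    account.hash(&mut h);
    (h.finish() as usize) % STREAMS
}

fn main() {
    let messages = vec!["alice", "bob", "carol", "dave", "erin", "frank"];

    // Partition by destination account; messages for one account land in one stream.
    let mut buckets: Vec<Vec<&str>> = vec![Vec::new(); STREAMS];
    for m in messages {
        buckets[stream_for(m)].push(m);
    }

    // Process buckets in parallel; the partitioning does not depend on core count.
    let handles: Vec<_> = buckets
        .into_iter()
        .enumerate()
        .map(|(i, bucket)| thread::spawn(move || (i, bucket.len())))
        .collect();

    for h in handles {
        let (i, n) = h.join().unwrap();
        println!("stream {i} executed {n} messages");
    }
}
```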

Kirill Mikheev:

We also changed various components, such as replacing the network protocol with QUIC. Essentially, we rewrote the technology from scratch, incorporating all previous experience and developments.

It's worth noting that the high-level implementations of ADNL and RLDP are very similar to QUIC, sharing the same basic idea of working over UDP.

Stepan Kamentsev: 

Yes, we've started experimenting with this technology. Although it's a very early prototype, we've managed to saturate gigabit channels on nodes and estimate throughput at about 60,000 messages per second. 

This figure might decrease when we refine other stages, like proper storage, but it could also increase as we optimize further. These numbers are from a network state where a third of the nodes are dead and consensus is poor, yet it still performs at this level. Ideally, a node can issue around 100 rounds per second, each with approximately 1,000 or 2,000 externals.

One trade-off to note is that this consensus mechanism doesn't scale well with a large number of nodes. Since all nodes are interconnected and must exchange data, it can't efficiently support, for example, 1,000 validators. The Sui tests that achieved 400-500 thousand TPS didn't use a virtual machine and involved only about 20 validators. Adding more validators decreases network performance due to bandwidth limitations, so there's a practical limit to the number of nodes this consensus can handle. We aim for up to 100 validators for this system; more than that won't work efficiently.

Currently, consensus involves rotation, where a validator session lasts, for example, 2 minutes with 13 nodes producing shard blocks. Then they rotate to another set of 13 nodes. The DAG consensus could be similarly time-limited. We have ideas for this, but it's a future task.

Kirill Mikheev:

There's always a trade-off between speed, throughput, and hardware requirements. Balancing this triangle will take additional time.

Aleksandr Alekseev: 

Regarding Fast Finality, our BFT consensus in shard chains can't provide the necessary finality speed. Therefore, we've been exploring how to modify our architecture to achieve the required speed. One approach is using techniques from SUI, adapted to our architecture.

Vyacheslav Belenko: 

Another option is applying the SMFT consensus, as I mentioned earlier. It's more lightweight and can provide significant advantages over BFT consensus at the shard level. However, achieving Fast Finality isn't solely about consensus. Kirill can elaborate further.

Kirill Zavarovsky: 

While our network, Everscale, can handle a very high number of transactions per second, these are independent transactions. When we deal with chains of transactions, the speed drastically drops, causing significant latency. 

Within the Fast Finality framework, we've addressed this by allowing smart contracts in different shards to interact without waiting for a master block. This requires eliminating the possibility of forks, because any shard block is immediately propagated throughout the network; if forks were allowed, they would grow quickly like a tree and cause the network to split in different directions. To ensure Fast Finality, we must eliminate forks entirely, which is why the masterchain preemptively appoints a collator for block collation.

Kirill Zavarovsky: 

Validators are always different, chosen randomly from the entire network, meaning they are not required to check each block with one another. This asynchronous process allows for high speeds, both parallel and in chains, which is our goal.

Aleksandr Alekseev: 

Let's move on to the next question. The second challenge is MESH. Kirill, please start by explaining the business challenge.

MESH 

Stepan Kamentsev: 

The reason we started developing MESH was simple: we thought long and hard about how to connect networks. We considered separate workchains and rewriting the elector, but this still leaves a single master. If the master is intentionally or unintentionally disabled, it can cause major issues, potentially stopping multiple networks. Workchains are more about partially public networks, where some information is hidden and some is public, or for metadata stored in the master and shared with other components.

Imagine a case where we create a blockchain for a country, implementing a CBDC. Now, imagine two countries; one shouldn't control the master, as this creates a power imbalance and potential for intentional disruption. They need to be equal, with no single point of failure. That's why we created MESH, a technology that links two networks, allowing data exchange without one network's failure affecting the other. MESH can theoretically connect many blockchains, even without direct connections, transmitting data transitively through intermediary networks. This removes the single point of failure and allows each network to have its own economy, gas costs, and tokens.

The business goal was to ensure system resilience and decentralize control. MESH enables data transfer between networks and supports forming transactions or complex cross-chain calls in the future. That's how I envisioned it initially. Kirill, you're welcome to continue.

Kirill Zavarovsky: 

We aimed to find a balance. On one hand, a single master chain with various workchains is too tightly connected. On the other hand, Layer 2 solutions are too independent, slow, and cumbersome. The goal was to integrate an inter-network protocol directly into the base layer of the blockchain protocol. This avoids the tight coupling of multiple workchains.

The entire blockchain state is a tree of cells, also a Merkle tree, including the message queue. This allows cryptographic proof of the message queue without needing the entire state, using a Merkle proof.

The blockchain is designed so that with the Zero State and a sequence of key blocks from Zero State onward, you can cryptographically prove any part of the state at any time. Validators can create such proofs and prove to validators of another network that certain data exists in their state. For us, this data is messages for the other network.

We’ve been developing this for a few months. Currently, one network can prove data to another network within its state. We can register one network with another, stating that network B with a certain Zero State will interact and exchange messages with network A. Network A then performs a bootstrap procedure, downloading the sequence of key blocks, similar to what a new validator does when joining a network.

After bootstrapping, a node in network A knows the current set of validators in network B, including their public keys and certificates, allowing it to verify any block in network B. Then, it downloads the Merkle proof of the queues.

Network A, with all the validator certificates from network B, verifies the Merkle proof by checking signatures and the validity of the Merkle tree. This enables network A to see if there are any messages for it from network B at the protocol level.
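
A high-level sketch of that acceptance rule with invented types: network A accepts a foreign message queue only if the proof is structurally valid and signed by enough of network B's validators (a two-thirds threshold is assumed here; the protocol's exact rule may differ).

```rust
struct ForeignNetwork {
    validator_keys: Vec<u64>, // stand-ins for public keys learned during bootstrap
}

struct QueueProof {
    state_root: u64,
    signatures: Vec<(u64, bool)>, // (validator key, signature valid?): placeholder check
    merkle_ok: bool,              // stand-in for actually re-hashing the Merkle branch
}

fn verify_foreign_queue(net: &ForeignNetwork, proof: &QueueProof) -> bool {
    let valid_sigs = proof
        .signatures
        .iter()
        .filter(|(key, ok)| *ok && net.validator_keys.contains(key))
        .count();
    // require more than 2/3 of B's validators and a structurally valid proof
    valid_sigs * 3 > net.validator_keys.len() * 2 && proof.merkle_ok
}

fn main() {
    let network_b = ForeignNetwork { validator_keys: vec![1, 2, 3, 4] };
    let proof = QueueProof {
        state_root: 0xAAAA,
        signatures: vec![(1, true), (2, true), (3, true)],
        merkle_ok: true,
    };
    println!(
        "network A accepts messages under root {:#x}: {}",
        proof.state_root,
        verify_foreign_queue(&network_b, &proof)
    );
}
```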

Aleksandr Alekseev: 

We’re close to connecting two networks, though we haven’t done an integration test yet, but it’s nearly ready.

Kirill Zavarovsky: 

Yes, it’s important to note that once the networks are operational and exchanging messages, we’ve designed it to minimize data exchange between them. Unlike within a workchain where all nodes exchange all blocks, causing a large data volume, here, if two networks are connected but have no traffic between them, the data exchanged is minimal thanks to the Merkle proof mechanism. We can delve into more details later.

Aleksandr Alekseev: 

Thank you, Kirill. These are our two main challenges for the year: MESH and finality. We’re constantly working on them. 

Stepan Kamentsev: I need to step back a bit to explain why we're considering the Mesh Network. We already have a concept of messages sent from one contract to another, which is persistent. If the data isn't delivered immediately, it's stored in the blockchain state until the next collation cycle clears these messages. Given this existing framework, nothing prevents us from directing these messages to another network. New message types could be transmitted between networks, maintaining the same guarantees we discussed. Of course, mechanisms for communication must be devised since fundamentally, the two networks are separate and do not synchronize all data. We started this conversation because it was already part of our blockchain's base functionality.

Stepan Kamentsev: 

If we had a blockchain like Ethereum, with a synchronous virtual machine that must execute transactions entirely or not at all, we couldn't achieve this. But in our system, each call to another contract is essentially a promise that it will eventually execute, though not necessarily immediately. This flexibility allows us to break the chain and record that a contract is on another network, storing this information in our state. The other network can then retrieve this data and execute it. If something goes wrong—like a network halt—upon reboot, it will sync from the last known point, download all necessary blocks and state, and process all pending interactions, ensuring reliable execution.

Kirill Zavarovsky: 

It’s important to note that these are different queues. A queue within one network and a queue to another network are not the same.
