Limitations of scaling blockchains and which VMs are theoretically the fastest

Intermediate · 2/25/2025, 6:37:30 AM

Forward the Original Title: ‘Limitations of scaling blockchains and which VM’s are theoretically the fastest’

TL;DR

We are seeing a shift towards single powerful servers; Solana, Megaeth, and the wide array of single sequencers all lean into one thing: a single high-throughput, high-memory server (of these, the non-L2s will always be practically the fastest).

I was recently chatting with another founder whom I greatly respect, and he suggested I write up our conversation.

It started with a simple question: “Does Sonic parallelize transaction execution in any way?” The answer is no. At first this might seem like a strange choice, because if you have been reading up on VM tech over the last two years, you will have seen “parallelize” pretty much everywhere. So why aren’t we?

To answer that, we first need to look at how Sonic engineering evaluates what to work on. We had a ton of theories that sounded practical on paper and that we wanted to implement, but limited physical team resources, so how do we choose the most impactful one? Instead of working on any of those ideas, the team decided to spend a year building Aida. Aida is an incredibly powerful tool that allows us to replay entire blockchains (any of them) in minutes instead of months, with useful performance metrics baked in. This means we can prototype, test in Aida, and very quickly know which theories hold up and which don’t.
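
To make that concrete, here is a minimal sketch in Go of what a replay harness of this kind boils down to: re-run recorded blocks against whatever execution prototype you want to evaluate and collect throughput numbers on an identical workload. The names (RecordedBlock, ExecuteFn, Replay) are illustrative assumptions, not Aida’s actual API.

```go
// Hypothetical sketch of the replay-and-measure idea: re-execute
// previously recorded blocks against an execution function and report
// throughput, so different prototypes can be compared on the same data.
package main

import (
	"fmt"
	"time"
)

// RecordedBlock is a previously captured block: just enough data to
// re-run its transactions deterministically.
type RecordedBlock struct {
	Number uint64
	Txs    []string // placeholder for real transaction payloads
}

// ExecuteFn is whatever VM/DB prototype we want to measure.
type ExecuteFn func(b RecordedBlock) error

// Replay re-executes every block and reports aggregate throughput.
func Replay(blocks []RecordedBlock, exec ExecuteFn) (float64, error) {
	start := time.Now()
	total := 0
	for _, b := range blocks {
		if err := exec(b); err != nil {
			return 0, fmt.Errorf("block %d: %w", b.Number, err)
		}
		total += len(b.Txs)
	}
	elapsed := time.Since(start).Seconds()
	return float64(total) / elapsed, nil
}

func main() {
	// Synthetic workload standing in for a recorded chain history.
	blocks := make([]RecordedBlock, 1000)
	for i := range blocks {
		blocks[i] = RecordedBlock{Number: uint64(i), Txs: make([]string, 100)}
	}

	// Baseline "prototype": do a token amount of work per transaction.
	baseline := func(b RecordedBlock) error {
		for range b.Txs {
			time.Sleep(time.Microsecond)
		}
		return nil
	}

	tps, err := Replay(blocks, baseline)
	if err != nil {
		panic(err)
	}
	fmt.Printf("baseline prototype: %.0f tx/s\n", tps)
}
```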

Aida also gives us some pretty powerful profiling of where execution time actually goes.

With the above in place, we could very quickly and accurately test our throughput assumptions. So we set out to compare a purely in-memory VM vs. disk, parallel execution, RDBMS vs. KV store vs. flat file, supersets, new consensus models, and more.
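
Comparisons like “RDBMS vs. KV vs. flat file” become cheap once the execution path talks to a small storage interface and each candidate backend implements it. The sketch below shows the shape of that setup; StateStore, MemStore, and the synthetic access pattern are assumptions for illustration, not Sonic’s real storage layer.

```go
// A minimal sketch of swapping storage backends behind one interface:
// the benchmark only depends on StateStore, so a disk-backed KV store,
// flat file, or RDBMS adapter can be dropped in without touching it.
package main

import (
	"fmt"
	"sync"
	"time"
)

// StateStore is the minimal surface the execution path needs.
type StateStore interface {
	Get(key string) ([]byte, bool)
	Put(key string, value []byte)
}

// MemStore is the purely in-memory candidate.
type MemStore struct {
	mu sync.RWMutex
	m  map[string][]byte
}

func NewMemStore() *MemStore { return &MemStore{m: make(map[string][]byte)} }

func (s *MemStore) Get(key string) ([]byte, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.m[key]
	return v, ok
}

func (s *MemStore) Put(key string, value []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] = value
}

// benchmark runs the same synthetic access pattern against any backend.
func benchmark(name string, store StateStore, ops int) {
	start := time.Now()
	for i := 0; i < ops; i++ {
		k := fmt.Sprintf("acct-%d", i%1024)
		store.Put(k, []byte{byte(i)})
		store.Get(k)
	}
	fmt.Printf("%s: %d ops in %v\n", name, ops, time.Since(start))
}

func main() {
	// A disk-backed KV or flat-file store would be benchmarked the same
	// way, just by passing a different StateStore implementation here.
	benchmark("in-memory map", NewMemStore(), 1_000_000)
}
```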

The single biggest improvement was the DB, an 800% increase; next came supersets, followed by consensus; and very low on that list, with a modest 30% improvement, was parallel execution. This seems counterintuitive, since the mental model of parallel execution suggests much bigger gains than the results show. So how did we parallelize? Maybe we made a mistake? The test was “Clairvoyance”, the absolute perfect form of ordering: an engine that knows the optimal sorting and parallelization ahead of execution (something that is already impossible in practice, so even the 30% is higher than it should be).
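
For intuition, here is a toy version of what a “clairvoyant” scheduler does: it assumes an oracle already knows every transaction’s exact read/write set, packs non-conflicting transactions into batches, and runs each batch’s members concurrently. The Tx type and greedy packing below are illustrative assumptions, not Sonic code; the point is that even with perfect knowledge, the gain is limited by how much the workload actually conflicts.

```go
// Toy "clairvoyant" parallel scheduler: with perfect foreknowledge of
// each transaction's touched keys, group non-conflicting transactions
// and execute each group concurrently. In reality the touched keys are
// only known after execution, which is why this oracle cannot exist.
package main

import (
	"fmt"
	"sync"
)

// Tx declares the state keys it touches (the oracle's knowledge).
type Tx struct {
	ID   int
	Keys []string
}

// schedule greedily packs transactions into conflict-free batches.
func schedule(txs []Tx) [][]Tx {
	var batches [][]Tx
	var used []map[string]bool
	for _, tx := range txs {
		placed := false
		for i, keys := range used {
			conflict := false
			for _, k := range tx.Keys {
				if keys[k] {
					conflict = true
					break
				}
			}
			if !conflict {
				for _, k := range tx.Keys {
					keys[k] = true
				}
				batches[i] = append(batches[i], tx)
				placed = true
				break
			}
		}
		if !placed {
			keys := make(map[string]bool)
			for _, k := range tx.Keys {
				keys[k] = true
			}
			used = append(used, keys)
			batches = append(batches, []Tx{tx})
		}
	}
	return batches
}

func main() {
	txs := []Tx{
		{1, []string{"A", "B"}},
		{2, []string{"C"}},
		{3, []string{"B", "D"}}, // conflicts with tx 1 on key B
		{4, []string{"E"}},
	}
	for i, batch := range schedule(txs) {
		var wg sync.WaitGroup
		for _, tx := range batch {
			wg.Add(1)
			go func(batchIdx int, t Tx) { // non-conflicting txs run concurrently
				defer wg.Done()
				fmt.Printf("batch %d: executing tx %d\n", batchIdx, t.ID)
			}(i, tx)
		}
		wg.Wait() // the batches themselves still run sequentially, in order
	}
}
```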


VMs and blockchains are very complex components, and often we measure the wrong metrics (or we don’t measure at all).

Then he asked me, “Where does the speed of Solana come from then? Or is it not practically higher than Sonic’s?” The answer: “Sonic is faster than Solana, but Sonic is not faster than the fastest Solana can be.”

We are seeing a shift towards single powerful servers; Solana, Megaeth, and the wide array of single sequencers all lean into one thing: a single high-throughput, high-memory server (of these, the non-L2s will always be practically the fastest). This approach, if properly optimized, will always be faster than one involving multiple participants. So the maximally optimized throughput of something like Solana or Megaeth will be higher than that of their next-fastest competitor that runs consensus across two or more servers.

So the next question is probably: why doesn’t Sonic use a single elected leader server then? The answer is that this is not what we are optimizing for. One of our north stars, which I wrote about back in 2018, is that as we see the advent of intercommunicating programs, at some point consensus is required. Imagine a busy intersection with no stop signs or traffic lights and hundreds of cars passing through. The most optimized approach is for the cars to “register” themselves at the intersection and then agree both on a sorting order and on the most efficient way for each car to move to maximize throughput. You cannot use a leader-based system here, and you cannot assume no party is malicious. Sonic consensus is optimized to the point where it can already validate on a Raspberry Pi today without losing any throughput, so all the cars can agree on Sonic-consensus-based ordering. Sonic is optimized for mesh networks.
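
As a hedged sketch of the intersection analogy: once every participant holds the same view of the registered set, each one can derive the identical ordering locally, and no leader ever has to be elected to decide it. The content-hash sort rule below is a stand-in for illustration only, not Sonic’s actual consensus ordering.

```go
// Leaderless deterministic ordering: every node that agrees on the same
// pending set computes the same sequence independently, so no single
// machine is ever trusted to pick the order.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// Request is one car (or transaction) registering at the intersection.
type Request struct {
	From    string
	Payload string
}

// key derives a content-based sort key; illustrative, not Sonic's rule.
func key(r Request) string {
	h := sha256.Sum256([]byte(r.From + "|" + r.Payload))
	return hex.EncodeToString(h[:])
}

// deterministicOrder sorts the agreed-upon set by that key, so every
// honest node arrives at the identical sequence on its own.
func deterministicOrder(reqs []Request) []Request {
	out := make([]Request, len(reqs))
	copy(out, reqs)
	sort.Slice(out, func(i, j int) bool {
		return key(out[i]) < key(out[j])
	})
	return out
}

func main() {
	pending := []Request{
		{"car-north", "turn left"},
		{"car-east", "go straight"},
		{"car-south", "turn right"},
	}
	// Any node holding the same pending set prints the same order,
	// without any leader having been elected to decide it.
	for i, r := range deterministicOrder(pending) {
		fmt.Printf("%d: %s -> %s\n", i+1, r.From, r.Payload)
	}
}
```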

Anyway, random ramblings, hope it helped somehow.

Disclaimer:

  1. This article is reprinted from [Andre Cronje]. All copyrights belong to the original author [Andre Cronje]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. The Gate Learn team translates articles into other languages. Copying, distributing, or plagiarizing the translated articles is prohibited unless otherwise noted.
