🔭 The Ethereum Scaling Roadmap
Here’s your roundup of the Ethereum roadmap if you don’t want to watch all two hours of: https://www.youtube.com/watch?v=b1m_PTVxD-s&t=7s. That said, it’s a great video by the team at Bankless, so well worth a watch!
It’s first important to note that Ethereum’s scaling journey began pretty much from the moment the genesis block was created; it wasn’t a sudden pivot or moment of divine inspiration. However, the journey has taken much longer than many in the community, and even many founding members of Ethereum, would have wanted. This is why you’ll see several of the co-founders working on projects which bear striking similarities to Ethereum’s direction: Gavin Wood with Polkadot, Joe Lubin with ConsenSys, Charles Hoskinson with Cardano.
Looking forward to a world where Ethereum is much more scalable and widely adopted, there are five categories of improvements: the Merge, the Surge, the Verge, the Purge and the Splurge.
Let’s dig into what’s due to happen at each stage….
This is where we’re heading at the moment: the transition from a proof-of-work consensus model to a proof-of-stake consensus model.
The main benefits are: reducing block latency to a fixed window of 12 seconds vs a variable ~13-second window, reducing energy consumption by ~95%, and replacing mining for block rewards with staking for block rewards (and, more importantly, for the ability to validate and process transactions).
Note that the cost of a transaction (the gas) is not expected to be materially impacted with this move. This will come at a later stage.
Ahead of giving the green light to change the consensus model comes robust testing on testnets, practice networks for the blockchain. So far there’s been very promising progress: the merge has already happened on the Kintsugi testnet back in December (a chain-split bug was found along the way, but that’s the point of testing on a testnet!) and then on the Kiln testnet, which launched more recently in March 2022. Once things are looking good on these testnets there will be a merge hard fork (expected in Q2 2022) which will officially move the Ethereum blockchain to proof of stake.
At this point there will be no more miners, and those in the space who lament NFTs and DeFi on Ethereum as the death knell for the planet will need to rethink their argument. However, one very important point to draw out here is that whilst PoS will be officially established on the network, validators who initially locked up their 32 ETH to validate and process transactions will not yet be able to redeem their stake or any rewards they have earned since the consensus model change. This will instead happen with another hard fork, the post-merge hard fork, which Vitalik expects will come ~6 months after the initial merge activation. It will therefore be at this point that the ~$36 billion worth of Ether in the ETH2.0 deposit contract can be released back into the ecosystem, along with any staking rewards earned between the merge hard fork and the post-merge hard fork. [This is not financial advice, but this will be a notable liquidity dump into the market!]
The next milestone on the scaling journey is called the Surge, and this is a scalability boost to rollups through sharding. To understand what this means we must first think about the architecture of Ethereum today (and post-merge). There is/will be a chain of blocks being committed to the Beacon Chain, the central spine of Ethereum where all layer-one activity happens. Complementary to this are various layer-2 scaling solutions such as sidechains, rollups, state channels, etc.
🚗 🛣️ Here’s an analogy: You have a town, which we’ll call B, and it has a single road through it. This means there are often traffic jams in the town, since there are lots of vehicles trying to get from town A to town C, and they need to pass through town B to get there. Since we can’t make the road through the town bigger without changing the layout of the town (e.g. demolishing some shops, homes, parks, etc.), we’re limited in how many cars can get from A to C. What we can therefore do is build some bypasses around the town to help people get from A to C. This gives additional capacity to the A-to-C route, and we can have some connecting roads and signs towards town B and the original road from A to C through B. This OG road is our Beacon Chain on Ethereum and the bypasses are our layer-2 solutions.
It’s worth noting that scaling through rollups has long been a vision of Vitalik and the wider Ethereum community. A good post to read on this is: https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698
One important thing to note about rollups, a specific type of bypass, is that the execution and storage is off chain (the cars drive down our bypass and can park along the bypass) but the data stays on chain (a summary of all the number plates for cars along the bypass is stored on the OG road). So this gives us more capacity but means we’re still relying on the OG road to store some information. Therefore the more rollups we have, the more information coming onto our Beacon Chain. Enter the idea of sharding!
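To make the on-chain-data / off-chain-execution split concrete, here’s a toy rollup sketch in Python. Everything here is invented for illustration (the transaction format, the compression, the state root); real rollups batch, compress and prove things very differently, but the shape is the same: execute off-chain, post only compressed data plus a state commitment on-chain.

```python
import hashlib
import json
import zlib

def apply_tx(state, tx):
    """Execute one transfer off-chain: debit the sender, credit the receiver."""
    state = dict(state)
    state[tx["from"]] = state.get(tx["from"], 0) - tx["amount"]
    state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]
    return state

def rollup_batch(state, txs):
    """Execute a batch off-chain; return the new state plus what goes on-chain."""
    for tx in txs:
        state = apply_tx(state, tx)
    # Only these two things land on layer 1: compressed tx data ("the number
    # plates") and a hash committing to the resulting state.
    calldata = zlib.compress(json.dumps(txs).encode())
    state_root = hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()
    return state, {"data": calldata, "state_root": state_root}

state = {"alice": 10, "bob": 0}
state, on_chain = rollup_batch(state, [{"from": "alice", "to": "bob", "amount": 3}])
# Anyone holding `on_chain["data"]` can replay the batch and check the root.
```

The key property is that the layer-1 footprint is just `on_chain`: the data is available to everyone, but nobody on layer 1 re-executes the transactions.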
The general idea of sharding (and you can read Vitalik’s thoughts on it: https://vitalik.ca/general/2021/05/23/scaling.html) is that you can split data into multiple pieces and in such a way that someone doesn’t need to store all the shards to be able to verify and compute something related. It’s therefore a highly scalable way to store data.
Initially Ethereum was planning to put everything in shards (execution, accounts, smart contracts, etc.); however, the plan has since developed into putting only the on-chain data (the summary of the number plates) from the rollups (bypasses) into shards. This makes the sharding approach much simpler and means users of the network can put data into shards on layer one to ensure availability and inherit the security of the layer one, but the execution of this data doesn’t happen on the main chain; instead it’s interpreted by the layer-2 solution. So compressed, highly scalable data on chain, but the execution done in a highly scalable way off chain. A benefit of this approach is that because only the data is stored in the shard, not the execution, there’s no need for fraud proofs! This is because there’s no such thing as an invalid block, only an unavailable block.
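The claim that nobody needs to hold all the shards rests on erasure coding: encode the data with redundancy so any sufficiently large subset of shards can rebuild the rest. Real designs use Reed–Solomon codes with KZG commitments; as a deliberately tiny sketch, here’s the simplest possible erasure code, a single XOR parity shard, which lets you recover any one missing shard from the others.

```python
import functools
import operator

def make_shards(data: bytes, k: int):
    """Split data into k equal shards plus one XOR parity shard."""
    size = -(-len(data) // k)                  # ceiling division
    data = data.ljust(size * k, b"\0")         # pad to a multiple of k
    shards = [data[i * size:(i + 1) * size] for i in range(k)]
    # Parity byte = XOR of the byte at the same position in every shard.
    parity = bytes(functools.reduce(operator.xor, col) for col in zip(*shards))
    return shards, parity

def recover(shards, parity, missing: int):
    """Rebuild one missing shard: XOR the surviving shards with the parity."""
    cols = zip(*(s for i, s in enumerate(shards) if i != missing), parity)
    return bytes(functools.reduce(operator.xor, col) for col in cols)

shards, parity = make_shards(b"rollup calldata for block 12345!", k=4)
restored = recover(shards, parity, missing=2)
```

One parity shard only tolerates one loss; production schemes extend the data so that, say, any 50% of shards suffices, which is what makes storing only a subset safe.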
So how will these shards roll out…
- Step one is potentially: https://eips.ethereum.org/EIPS/eip-4844 (blob transactions to make rollups cheaper by expanding how much data can be stored on chain)
- Then the introduction of basic sharding
- Next step would be the full sharding spec with an increase in how much data can be within each shard (N.B. this might still require users to store all shards rather than only some shards)
- Finally it’s a many shard world where nodes only need to process a single shard and the beacon chain (this is called quadratic sharding).
- The bonus is then with the addition of data availability checking (think along the lines of Arweave and Filecoin’s data checking methods) to ensure shards are storing data permanently and properly.
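The intuition behind data availability checking can be captured with a little probability. In this sketch (a simplification of the real sampling protocols): a light client asks for a few chunks chosen at random; if a storer has withheld a meaningful fraction of the data, the chance of every sampled chunk happening to be available shrinks exponentially with the number of samples.

```python
def detection_probability(n: int, m: int, k: int) -> float:
    """Probability that at least one of k uniform random samples
    (with replacement) hits one of the m withheld chunks out of n."""
    return 1 - ((n - m) / n) ** k

# Withholding even 25% of 1024 chunks is caught almost surely with
# just 30 random samples per client.
p = detection_probability(n=1024, m=256, k=30)
```

This is why sampling works at all: clients download a tiny, constant number of chunks yet gain near-certainty that the whole shard is available.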
It’s worth noting that Ethereum’s initial ETH2.0 plan was for 64 shards, and whilst other plans have been suggested for sharding levels above quadratic sharding, Vitalik has noted risks around this potential expansion: https://vitalik.ca/general/2021/05/23/scaling.html.
One of the standout benefits of sharding is going to be the big decrease in transaction fees on the layer-2 solutions, since there will be ever more bypasses to drive down. However, it’s important to note that the OG road hasn’t changed, so fees there are likely to stay high, with Vitalik even predicting they might rise! This shouldn’t cause any concern, though, since it’s all part of Ethereum’s journey to become a layer-2-first protocol with the vast majority of user activity happening on the cheaper layer 2s.
From a block latency perspective we might also see some increases, with a predicted move from the static 12-second intervals post-merge towards perhaps 16-second blocks. However, with a move from ~15 TPS to a potential 100,000 TPS, block time increases shouldn’t be too much of a concern. If you want to see a neat visualisation of TPS across Ethereum’s layer one and associated layer 2s then I’d recommend: https://ethtps.info/
The next stage is the introduction of verkle trees 🌳, a replacement for the Patricia Merkle trees which currently store Ethereum’s state (all accounts, contracts, balances, etc.), so that stateless clients can be introduced. To simplify the thinking, as it’s a little mathsy: Merkle trees allow for the compressed storage of lots of information by essentially ‘multiplying’ [more correctly, hashing] pairs of information together to create a final value. As such it’s very easy, given just the Merkle root (this top summary value), to check whether some information is within the tree.
With the move to verkle trees it’s a change from hash-based commitments to vector-based commitments, and it will provide the benefit that you don’t need access to the full state (which keeps growing and requires you to have all blocks [or block n-1], hence a high sync time) to verify a block n. Instead you use the verkle proof, which gives you only the pieces of the state which are relevant to the block, plus a proof that these pieces are correct. So you can verify the block with only the verkle proof, i.e. easier and faster. This should help attract more nodes to the network, since the requirements for doing so are much cheaper: you don’t need to hold the entire state, you can instead just run a stateless client and use verkle proofs to verify a block. More nodes = greater decentralization! It’s worth noting that technically you can make proofs with Patricia Merkle trees, but they can be huge, so they’re a little too large to easily verify, whereas verkle trees are succinct, so the proofs are much shorter and therefore quicker to verify.
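To ground the “hashing pairs up to a root” idea, here’s a minimal Merkle tree in Python with proof generation and verification. This is a generic textbook construction, not Ethereum’s actual Patricia Merkle trie (and verkle trees replace the hashing with polynomial commitments entirely); the point is just that a proof is the log2(n) sibling hashes along one path, not the whole tree.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def _next_level(level):
    """Hash adjacent pairs, duplicating the last node on odd-length levels."""
    if len(level) % 2:
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)], level

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        level, _ = _next_level(level)
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hash at each level for the leaf at `index`."""
    level, proof = [h(l) for l in leaves], []
    while len(level) > 1:
        parents, padded = _next_level(level)
        sib = index ^ 1
        proof.append((padded[sib], sib < index))   # (sibling, sibling-is-left?)
        level, index = parents, index // 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

leaves = [b"acct-%d" % i for i in range(8)]
root = merkle_root(leaves)
ok = verify(leaves[5], merkle_proof(leaves, 5), root)
```

With 8 leaves the proof is just 3 hashes; with a million leaves it would be 20. Verkle trees push this further: the vector commitments mean one small proof covers many children at once, which is what makes stateless clients practical.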
This stage is basically about removing a bunch of historical data from the Beacon Chain, and the associated shards, in order to prune the total network size. Right now when you sync a node you have to download all historical blocks, which is hundreds of GBs, and post-sharding it will be even more! So the plan is to move the burden of storing the old data elsewhere and thereby decrease the sync time and storage requirements for nodes.
The possible EIP on this is: https://eips.ethereum.org/EIPS/eip-4444
The main contenders for where the historical data might be stored are centralized places like block explorers, which need the full history for their services, and decentralised locations like the Portal Network (the Ethereum Foundation’s solution), where nodes will store portions of the historical data. It’s also likely that third-party protocols such as The Graph, Arweave, IPFS and Filecoin might look to store historical Ethereum data.
The first hard fork here will remove the requirement for nodes to store data older than one year; this is the history expiry stage. The second fork will prune the state data, so that accounts which haven’t been touched in over a year are moved to the expiry tree. The expiry tree isn’t stored by the Ethereum blockchain, since it’s in the history, and so needs to be accessed by providing a verkle proof to the historical data store. The data can then be pulled back into the current state.
A bucket of other cool things Ethereum will look to do!
One of the potential items is PBS (proposer/builder separation), where the validator who builds the block doesn’t have to be the same validator who proposes it to the network, thereby allowing more nodes to enter the network and contribute towards validation.
Another aspect of the Splurge is looking at tidying up technical debt to improve how easy it is to run a node, debug, add code and implement changes across the network.
So when is all this happening?
Well, it’s important to note that this isn’t a sequential roadmap of categories; instead, parts of these categories will happen at the same time. The image below splits out the types of changes but should be read right to left, with hard forks noted in blue; I’ve added red lines for different epochs of changes.
Vitalik Buterin Twitter: https://pbs.twimg.com/media/FFm9X58WQAgwkKI.jpg