The Blockchain Debate Podcast

Motion: Ethereum 1.0 can scale (Uri Klarman vs. Mrinal Manohar)

April 29, 2020 Richard Yan, Uri Klarman, Mrinal Manohar Episode 9

Guests:

Uri Klarman (@uriklarman)
Mrinal Manohar (casperlabs.io)

Host:

Richard Yan (@gentso09)


The guests we have today have close ties to Ethereum. One is actively working on a solution to scale the 1.0 version. The other was an early investor in Ethereum.

In this episode, I learned about:

  • An interesting layer-0 approach to improve the performance of Ethereum 1.0 as well as any layer-1 protocol;
  • Disadvantages in layer-2 solutions with similar value proposition in scaling;
  • Main benefits of upgrading from Ethereum 1.0 to Ethereum 2.0, besides the change from POW to POS;
  • And other non-scaling goodies such as perspectives on M&A of public blockchains in a crowded space.


Be sure to check out our previous episodes too, on Bitcoin’s store of value status, tokenization and smart contracts, DeFi, Bitcoin halvening, China’s future in blockchain, POW vs POS, oligopoly vs multiplicity of public blockchains, and permissioned vs permissionless blockchain for enterprise usage.

If you would like to debate or want to nominate someone, please DM me at @blockdebate on Twitter.

Please note that nothing in our podcast should be construed as financial advice.

Source of select items discussed in the debate:

Richard:

Welcome to another episode of The Blockchain Debate Podcast, where consensus is optional, but proof of thought is required. I'm your host Richard Yan. Today's motion is: "Ethereum 1.0 can scale." The guests we have today have close ties to Ethereum. One is actively working on a solution to scale the 1.0 version, as well as other blockchains. The other was an early investor in Ethereum. In this episode I learned about: an interesting layer-0 approach to improve the performance of Ethereum 1.0 as well as any layer-1 protocol; disadvantages in layer-2 solutions with similar value propositions in scaling; main benefits of upgrading from Ethereum 1.0 to 2.0 besides the change from POW to POS; and other non-scaling goodies such as perspectives on M&A of public blockchains in a crowded space. Be sure to check out our previous episodes too, on Bitcoin's store of value status, tokenization and smart contracts, DeFi, Bitcoin halvening, China's future in blockchain, POW vs POS, oligopoly vs multiplicity of public blockchains, and permissioned versus permissionless blockchain for enterprise usage. If you would like to debate or want to nominate someone, please DM me @blockdebate on Twitter. Please note that nothing in our podcast should be construed as financial advice. I hope you'll enjoy listening to this debate. Here we go! Welcome to the debate. Consensus optional, proof of thought required. I'm your host, Richard Yan. Today's motion: Ethereum 1.0 can scale. Scalability refers to the ability of the network to sustain performance with a large number of nodes. And it seems long settled that Ethereum 1.0 has this problem of scalability. This is one of the main reasons for the urgent call for its upgrade to 2.0. That said, I'm very curious to hear opposing views on this quote-unquote conventional wisdom. To my metaphorical left is Uri Klarman, arguing for the motion. His position is that Ethereum 1.0 can scale. 
In fact, I suspect that he would argue that scalability is possible for many other blockchains as well. To my metaphorical right is Mrinal Manohar, arguing against the motion. His position is that Ethereum 1.0 cannot scale. Although this seems, again, to be somewhat of an established fact, I'm excited to hear how he defends it against Uri's arguments. Gentlemen, I'm excited to have you join the show. Welcome!

Uri:

Thank you.

Mrinal:

Thanks Richard. Pleasure to be on.

Richard:

Here's a bio for the debaters. Uri is CEO and cofounder of Bloxroute, a layer-0 solution aiming to solve the scalability bottleneck for all blockchains. In particular, the solution operates at the network layer and can be proven to treat all nodes fairly in propagating blocks. Uri is an interdisciplinary network researcher. His specialties include alternative content distribution networks, trustless peer coordination, and security.

Richard:

Mrinal Manohar is a computer scientist and CEO & cofounder of CasperLabs, a new proof-of-stake public blockchain based on the fully decentralized CBC Casper consensus algorithm. Mrinal was previously with Microsoft, the consulting firm Bain & Company, and the private equity firm Bain Capital. He also previously worked at a $1 billion hedge fund, where he was sector head for the Tech, Media and Telecom sector. He began programming at age 11 and received his Masters from Carnegie Mellon University. He has been investing in the space since 2012 and was an early investor in Ethereum, Blockstack and several other protocols. As usual, the debate has three parts: an opening statement from both sides, starting with Uri. The second round is the body of the debate, with me directing questions to the debaters. Both sides are highly encouraged to follow up with their opponent after hearing answers on the other side. And of course, they're also free to respond to each other's points raised during the opening statement. The last round is audience questions selected from Twitter. And we'll end with concluding remarks from both debaters. Currently our Twitter poll shows roughly 23% agreeing that ETH 1.0 can scale, and 65% disagreeing with that motion. We will have a post-debate poll, and whoever tips the ratio more to their side wins the debate. Okay, let's get started with the opening statement. Uri, please go ahead.

Uri:

Thank you. I appreciate that. A lot of people in the blockchain space refer to this technology as something which is very complex and very complicated. And so there is the scalability problem, which is: can the blockchain handle a lot of transactions? Not four transactions per second, like Bitcoin, or the 10 that Ethereum is handling now. Can you do hundreds of transactions per second, thousands, tens of thousands, beyond that? So people refer to this problem as being complex. They think it's super technical and it has to be very complicated. But in reality, it's kind of simple. The way blockchains work is that people make transactions, right? I'll send Richard one Bitcoin, or I'll send Mrinal one ETH. I'll create a transaction saying send some ETH, for example, from me to Mrinal. And I sign the transaction and I send it to everybody in the network. And so as people send transactions, these transactions propagate through the network, and miners or validators hear of these transactions and try to aggregate them into blocks. So blocks are just like a long list of transactions with a bit of metadata, like a version and timestamp, et cetera. And then they have to propagate: once somebody has created a block, he has to send this block to everybody else, so the next person can add the next block afterwards. And the result, as we all know, is a chain of blocks, the blockchain, which contains all the transactions that ever happened. Now here's the thing: you can't add the next block before hearing of the previous one. And so if you try to make the blocks 10 times bigger or a hundred times bigger... If I mine a block which is a hundred times bigger, it takes me a hundred times longer to send it to my peer, it takes a hundred times longer to reach everybody else in the network. And so the time between blocks has to be increased by a factor of a hundred. So the rate, the frequency of blocks, goes down by a factor of a hundred, and you go back to square one. 
You're doing a hundred times larger blocks, but a hundred times less frequently. And this is the scalability bottleneck. Okay? The idea that if you make blocks larger, they take proportionally longer to propagate, and therefore you have to space the blocks apart by the same proportion: this is not only a bottleneck, it is THE bottleneck. We have seen that you can run blockchains at significantly higher rates than those currently being used in Ethereum, in Bitcoin, and in others. We were able to run Bitcoin at a peak of 3,000 transactions per second, just by allowing blocks to propagate faster and doing larger and more frequent blocks. And for this discussion, we were able to take Ethereum as is and run it at hundreds of transactions per second. The only thing that we did was change the network layer, allowing blocks to propagate extremely fast. And so can Ethereum, ETH 1.x, can Ethereum 1.0 scale? The answer is absolutely yes. The only thing you need to do is tweak a bit, and improve a bit, the way blocks propagate, and that solves it. The other bottlenecks down the line, processing time, how long it takes to do all sorts of operations, they are orders of magnitude away, and therefore not a bottleneck at all. Now, it remains to be seen if they turn out to be bottlenecks later. And that's the argument why Ethereum 1.0 can scale: there is a single bottleneck, at the network layer, and it is solvable.
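Uri's back-of-the-envelope argument can be put in code: if propagation time grows linearly with block size, the block interval must grow by the same factor and throughput stays flat, whereas if propagation is made roughly size-independent, throughput scales with block size. The bandwidth and transaction-size numbers below are illustrative assumptions, not figures from the episode.

```python
# Back-of-the-envelope model of the block-propagation bottleneck.
# All constants are illustrative assumptions, not measurements.

TX_SIZE_BYTES = 250          # assumed average transaction size
BANDWIDTH_BPS = 1_000_000    # assumed propagation rate (bytes/sec)

def effective_tps(txs_per_block: int, fast_propagation: bool) -> float:
    """Throughput when the block interval must cover propagation time."""
    block_bytes = txs_per_block * TX_SIZE_BYTES
    if fast_propagation:
        # Layer-0 relay: propagation is (nearly) size-independent.
        propagation_s = 1.0
    else:
        # Naive gossip: propagation grows linearly with block size.
        propagation_s = block_bytes / BANDWIDTH_BPS
    # Blocks must be spaced far enough apart for propagation to finish;
    # real chains use a large safety multiple, here 10x.
    block_interval_s = 10 * propagation_s
    return txs_per_block / block_interval_s

# 100x bigger blocks under naive gossip: throughput does not change,
# because the blocks also become 100x rarer.
print(effective_tps(4_000, fast_propagation=False))
print(effective_tps(400_000, fast_propagation=False))

# With size-independent propagation, 100x bigger blocks give 100x TPS.
print(effective_tps(400_000, fast_propagation=True))
```

The model is deliberately crude, but it captures why Uri calls propagation THE bottleneck: every other parameter cancels out of the throughput calculation except how propagation time scales with block size.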

Richard:

Okay, great. Powerful opening statement, Uri. Mrinal, please go ahead.

Mrinal:

To start off, I by and large agree with a lot of what Uri said, especially about the congestion at the network layer. However, I would like to talk about two things specifically. One, I do think the problem is a little more complicated than Uri said. And what I'd say specifically there is, in a completely decentralized system where people can choose any sort of hardware stack they want, choose to be located anywhere, and might have extremely different latencies, etc., it might be a little harder to make sure that block propagation is fast enough. That being said, it could still happen. But secondly, you also want to make sure that you've heard from everyone, or at least most of the network, and that most of the network has opined on every single block, reason being that's where the inherent decentralized security of the system comes from. The second main thing I want to discuss is what scalability really is. I think it's three things, not just one thing. Everyone focuses on throughput, and I think throughput is very, very important, defining throughput as the number of transactions per second. I think the reason Ethereum 1.0 cannot really scale long-term is because proof-of-stake versus proof-of-work is just a much more efficient system. Whatever scaling we get on the proof-of-work system, even with better block propagation, will be infinitely better... not infinite, many times better in a proof-of-stake system. And it comes down to one fundamental thing. In a proof-of-work system, 90 to 95% of your processing power is used to generate hashes, a fancy word for random numbers, which is what gives the system its underlying security. Redirecting that 90 to 95% of processing power to doing actual useful work, like processing transactions, which is what happens with a proof-of-stake model, really increases throughput and increases the amount of time that your computers are spending on the actual formation of consensus and actual transaction processing. 
The second part of scalability really is security. The security of a proof-of-work system is completely related to the emission schedule, because the amount of hardware you're going to put against the system in a proof-of-work system is directly proportional to how many ETH or how many Bitcoin you're going to earn in rewards. However, in a proof-of-stake system, the underlying cost to attack the system is actually the total amount of stake that's staked on the network. So as the network grows more valuable, there's a one-to-one correlation in how security increases. And I think this is really, really important for scalability, because as systems get faster and more widely used, you want security to go up proportionally. And the issue with proof-of-work is you'd have to play with emission curves to really make sure that the incremental cost of hardware is enough to keep attackers at bay. Whereas in proof-of-stake, you have a natural system that has a bonded stake and prevents attacks. And then finally, I'd say that scalability also comes down to developer adoption. And ETH 1.0 uses a proprietary programming language, Solidity. That's the most widely used smart contracting language on the planet. But there are like 15 to 20,000 blockchain developers in the world. Now you contrast that with 26 million developers worldwide. It tells you how under-penetrated the industry is. And if you look at polls that are conducted, the primary reason for this is because people aren't familiar with the architectures and programming languages that you use in blockchain. People want to use Rust, they want to use AssemblyScript, they want to use Python. And this is a direction that Ethereum is going in with its ETH 2.0 and 3.0 builds, where they're going to a WebAssembly-type build, which will support open programming standards. 
And so all three of these issues, throughput, security, as well as developer adoption, really get much better in ETH 2.0 and ETH 3.0. Which is why my belief is that, while ETH 1.0 can scale a little bit more with great modifications at the network layer, I don't think it's enough to be sustainable across all these three factors. And I think that's really the reason why they're going in the direction that they are. That being said, I do think fixing the bottleneck at the network layer is very, very important.

Richard:

Thank you Mrinal. Definitely lots to unpack from both sides, but let's proceed to round two, where I'll be directing questions to each of you. My first question is for Uri: can you provide a quick rundown of the various blockchain-agnostic scalability solutions out there? For instance, Celer Network is a layer-2 solution with the same, or a similar, value proposition, albeit approaching it from a different layer than Bloxroute.

Uri:

Sure. But I will start by referring a bit to what Mrinal said, because he brought up a lot of interesting points. I don't agree with everything that he said. One thing he said is an argument that I hadn't heard a lot, and I think I agree with it. The amount of tooling and the languages that you can use on a blockchain really is going to affect how fast you're going to get traction, like the speed at which you're going to get more stuff happening, etc. And that's part of growing up, and part of the scalability. So it's not a standard way of defining scalability, but I agree that that point is important. I don't agree the same way when he said, well, the security has to go hand in hand with the importance of the blockchain, etc. It's an important point, but I don't think it relates really to the scalability. Scalability is really defined as: can the blockchain handle a lot of transactions? Less about: can we get people to play with our blockchain, and can we get them to participate? So, just referring to the points Mrinal brought up. Going to your question, it's actually one of my favorite questions. I think a lot of people are so deep in this space, and everybody is running in all directions, and it's really hard to keep up with what's happening. But having a concise understanding of what is layer-1, what is layer-2, what is layer-0, how do they play together? And these are fancy words, channels and all these kinds of things. So I'm really glad that you brought that up. The quick rundown is the following. Well, the blockchain, as I said earlier, is when you have participants, right? You have people running nodes, people just sending transactions, and they form a consensus, right? Somebody creates a block, it goes to everybody else, everybody starts working on the next block, etc. That's what we refer to as layer-1. Why? Because layer-2 is what comes on top of it. Okay. 
The idea of a layer-2 like Celer, okay, and I'll give a few examples in a second, is the idea that rather than making a transaction, sending it to everybody else and then waiting for it to go on the blockchain, you could do something like: oh, if me and Mrinal will just send transactions among ourselves, can we do something where we just transact, just me and him, and not put it on the blockchain? But eventually, or periodically, at some point, we'll commit to the blockchain. So we could do something like... if people have heard of state channels, the idea is that maybe I'll put 5 ETH, and Mrinal will put 5 ETH, and now we have a joint account, a channel, which when it closes, I should receive five ETH back and he should receive five ETH back. But now, maybe we can update that. Without telling everybody else, I can give him 1 ETH. So now it's six to him, four to me. Then he can pay me three, and now it's seven for me and three for him. And we could do that an infinite number of times, and as fast as we can do it. And eventually, let's say a year from now, we say, like, oh, let's settle this. Let's put it on the blockchain. Right? And then, based on that state of our channel, of our joint account, each of us gets his correct portion. This is the high level of a layer-2. Now, the complications around layer-2 are: how do you make it secure? What if it was five-five, and then Mrinal sends me three ETH, so now I'm supposed to get eight and he's supposed to get two, and I go offline. I go do some stuff, and while I'm not looking, Mrinal takes the older state, where it's five-five, which is better for him, and puts that on the blockchain. And if I'm not aware that this is happening, I might lose my money. And so all the complexities around layer-2, which happens on top of the blockchain, really go to that area of preventing these kinds of frauds. 
If you think about ZK-rollups or optimistic rollups, really any of these, they all fall under the category of, what's it called, fraud proofs. If Mrinal is behaving this way, there are actors, whether it's me, whether it's somebody external to me, like a watchtower, which show proof that this is problematic. They show, like, oh, here's a more updated, more correct state, and I will get my deserved share and Mrinal will actually be punished for it. So these things periodically interact with the blockchain. Okay. It could be just once at the beginning and once at the end. It could be that every day we settle. So the difference isn't too great, and we have a time window in which we can play with these kinds of things. So that's where these happen. Where we operate, layer-zero, is just the network layer, right? The idea is, when in the blockchain layer-one nodes send a block to one another, we're just a faster internet for blockchains. The idea is that you can send a block from one person, or from one computer, to another computer extremely fast, even if that block is very, very big. This is what we do. And the same way that your computer doesn't know that its data is being sent on a copper wire or optic fibers, the same goes here. It doesn't know what happens at the network layer. It just asks for data to be sent, right? It's being handled lower in the stack. And so if you look at the entire blockchain stack, you have layer-zero, which just happens at the network layer and allows blockchain participants and blockchain nodes to send messages and blocks and transactions. You have layer-one, the consensus. And layer-two extends this consensus, by only periodically interacting with it.
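The state-channel flow Uri walks through (two deposits, repeated off-chain balance updates, one on-chain settlement) can be sketched as follows. The class and the nonce-based "latest state wins" rule are illustrative assumptions; a real channel also carries signatures and a dispute window to handle the fraud case he describes.

```python
# Minimal sketch of a two-party payment channel, per Uri's example.
# Illustrative only: real channels add signatures and a challenge period.

class Channel:
    def __init__(self, deposit_a: int, deposit_b: int):
        self.balance_a = deposit_a
        self.balance_b = deposit_b
        self.nonce = 0  # higher nonce = more recent state wins on-chain

    def pay_a_to_b(self, amount: int):
        assert amount <= self.balance_a, "insufficient channel balance"
        self.balance_a -= amount
        self.balance_b += amount
        self.nonce += 1

    def pay_b_to_a(self, amount: int):
        assert amount <= self.balance_b, "insufficient channel balance"
        self.balance_b -= amount
        self.balance_a += amount
        self.nonce += 1

    def settle(self):
        """The only step that touches the chain: the final balances."""
        return (self.balance_a, self.balance_b)

# Uri (a) and Mrinal (b) each deposit 5 ETH; the channel opens five-five.
ch = Channel(5, 5)
ch.pay_a_to_b(1)    # now four to Uri, six to Mrinal
ch.pay_b_to_a(3)    # now seven to Uri, three to Mrinal
print(ch.settle())  # committed on-chain once
```

Only `settle()` would touch the chain; every intermediate update is free, instant bookkeeping between the two parties, which is where the layer-2 throughput gain comes from.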

Richard:

So Mrinal, would you like to counter any of the points that Uri raised?

Mrinal:

So let's start with Uri's comments on what I talked about. I agree with the programming languages and broad distribution point as well. I'd say where we have a slight difference, it's kind of nuanced. I guess Uri is saying scalability is primarily about throughput, and I'd add it's about throughput and security. And here's why I think the security is very, very important: you can get any amount of throughput you want if you start making compromises to decentralization and compromises to security. The moment you say, hey, I'm going to relax my threshold on how many people actually participate in consensus, or you relax your threshold on how decentralized the system is and say, you know what, I'm just going to put two nodes in charge and let them do what they want, you can get throughput without fixing almost anything. And so the reason why I say they're both important is, you need to have security that scales with the network as well. Otherwise the throughput doesn't really make sense. But it's a nuanced point, and I get that. If there's a KPI for measuring it, it's probably TPS. And then you have the backdrop of: this is the TPS, but these are the compromises being made or not being made. So that's the only small point of difference I have there. As for what you said about layer-zero, layer-one and layer-two, I happen to agree pretty much a hundred percent with everything you said. I think that was a really good way of breaking it down. When I refer to layer-two solutions, the examples I really like to give are... we've seen this happen before, like PayPal, Stripe, Square. A lot of these payment solutions tend to batch transactions before they propagate them to the Visa or MasterCard networks. Now, that's not a perfect analogy, but that's kind of how layer-two and layer-one work: layer-two kind of batches things. And I think the example he gave about state channels is really, really good. 
I also agree with the description of layer-zero, and I agree with why it's super important. Where I'd add slightly more nuance is, I think layer-zero is great for Ethereum 1.0, but I actually think what Bloxroute is doing will be even more powerful for what Ethereum 2.0 and 3.0 are doing, and what we're doing at CasperLabs, because in a proof-of-stake system, a lot of the consensus is just based on message passing. There's very little of this "guess the hash" work going on. And so a good layer-zero could be even more powerful. So whatever benefits layer-zero gives a proof-of-work layer-one, I think it could give even more to a well-constructed proof-of-stake layer-one. But that's a long-winded way of saying I pretty much agree with most of what he said. I just wanted to add some additional context on my views.

Richard:

Regarding security as a part of the broad definition of scalability: I totally agree that ultimately, if you are sacrificing security in pursuit of scalability, then that is not a good thing to do. But the question is, does this layer-zero solution that hopes to scale Ethereum 1.0 come at the expense of security in any way? I was hoping to get some perspective on this from both of you.

Uri:

So I think one important thing, going back to what Mrinal said. Mrinal is right to say, listen, whatever you can do for ETH 1.0 is even better for ETH 2.0. I agree with that; there is an argument to be made there. I, at my core, am a networking guy. Even before being a crypto guy, I am a networking person. Are there holes in the idea of proof-of-stake in terms of security which have been patched the right way? Is that absolutely true? I think it's true, but that's not where my expertise lies, deeper than anything else. But let's take it for granted that, yeah, it might be true. It might be that the consensus that's being done with POS is maybe more secure and maybe faster than what we have in Ethereum 1.0. That being said, that is so far down the line that stopping Ethereum 1.0 from scaling, stopping working on scaling Ethereum 1.0 and hoping for Ethereum 2.0 to save the day instead of just scaling Ethereum 1.0, seems like, I would even say, a fool's errand. Why would we do that? And then, pushing that even further regarding security: Ethereum 1.0 has so much going for it. So many people working on it, including the great people at Casper, right? CasperLabs. The research coming from there is terrific. The ideas are great. Some of them could be applied immediately, some of them will be applied in the future, but that doesn't really have to do with scaling Ethereum 1.0. And because Ethereum 1.0 has so much going on for it, I go back to Mrinal's point regarding the security. The security of Ethereum 1.0 is terrific, and greater than any of the other blockchains, because so much is already invested in it. And so even if it's not as optimized and as efficient as other blockchains and consensus mechanisms and systems, even with the current imperfect model, I would argue that its security is actually greater than others'.

Why? Just because a lot more people care about it, versus, take some blockchain at #100 on CoinMarketCap. Okay, even if it's really great, even if the tech is terrific, if not enough people are invested in it, it's not going to get the same benefit, if we're talking about the benefit of security to scalability. So, to boil it down: even if Ethereum 1.0's security has inefficiencies which can be fixed, that doesn't make it not secure. I would even argue that it's more secure, just because of how much stake, in terms of hardware and money, is invested in it.

Mrinal:

I actually don't disagree with either of the things you said. I just have a couple of nuances. So let's go to the first point, where you talked about how it's a fool's errand not to scale 1.0 and just wait for 2.0. I actually have to agree with that, because any innovations we do at the network layer to make things faster will carry over to 2.0 anyway. So it won't be wasted work. There might be some form of rejiggering required, but it's all good work. My point is just that 1.0, versus 2.0 and 3.0, just won't scale as much. So my view is that all the work being done at the network layer to make it faster is good, but the system does need to evolve. Let's talk about your point on security, because you actually make a really strong argument and I don't disagree with it, but I think I need to clarify what I meant. What I meant is: in a proof-of-work system, the security relative to the size of the network, and in a proof-of-stake system, the security relative to the size of the network, are quite different. And it's better in proof-of-stake. Let me explain why. You're absolutely right that Ethereum has much higher security than smaller chains, but maybe not as a percentage of the value of the network. I'll explain why. In a proof-of-work system, the asset that you're putting against, the asset that's securing the blockchain, is your [inaudible] that's generating hash power. 
And to your point, billions of dollars have been spent on hardware to run the Ethereum network since its inception in 2014. However, the point at which it's efficient to add that hardware is related to the emission schedule that year. So let me give you... let's just put numbers out there and make this very clear for people. Assume your network is worth $20 billion, and assume it emits 5% of its total value every year. So that's a new $1 billion coming into the network every year. So the optimal amount of hardware that you will use to secure that network, or new hardware purchases, is just sub-$1 billion. $1 billion is the revenue; you choose what you want your cost to be, and the difference is the profit that the miners get. Hence the cost to attack the network, if you want to do a 51% attack, is also about that amount. So call it $800 million, if you're assuming a 20% profit margin. Now think about a proof-of-stake network where half the money is already staked. Now you have a $20 billion network with $10 billion of security against that network. Meaning, in order to attack the network, the amount of stake you need to buy is equivalent to almost $10 billion, to have enough of a say within the network. And that's really what I meant: in proof-of-stake versus proof-of-work, for a network of the same size, in proof-of-stake you actually have more economic security. Now, I'm vastly oversimplifying here. But your point is true, right? Will Ethereum with proof-of-stake have more security than someone who's number 100 on CoinMarketCap? Yes, 'cause the cost to attack that network is lower on an absolute basis, while it may still be higher on a percentage basis. I do think overall, though, as the blockchain industry moves towards more mainstream adoption, anything we can do to keep increasing the cost of attack is just a good thing. But I don't disagree with your points. 
I was just trying to clarify that I meant it much more on a percentage basis than an absolute basis. And I hope the example kind of clarified the point I was trying to make.
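Mrinal's arithmetic here can be laid out explicitly. Using the numbers he gives (a $20 billion network emitting 5% per year under proof-of-work, miners at a 20% margin, versus half the supply bonded under proof-of-stake), a rough sketch of the two cost-of-attack estimates, deliberately oversimplified as he notes:

```python
# Mrinal's oversimplified cost-of-attack comparison, using his numbers.

NETWORK_VALUE = 20_000_000_000  # $20B network

# Proof-of-work: attack cost tracks the profitable hardware spend,
# which tracks annual emission (5%) minus the miners' margin (20%).
annual_emission = 0.05 * NETWORK_VALUE          # $1B/year in rewards
pow_attack_cost = annual_emission * (1 - 0.20)  # ~$800M of hardware

# Proof-of-stake: attack cost tracks the bonded stake (half the supply).
pos_attack_cost = 0.50 * NETWORK_VALUE          # $10B of stake

print(f"PoW attack cost: ${pow_attack_cost:,.0f}")
print(f"PoS attack cost: ${pos_attack_cost:,.0f}")
print(f"PoS / PoW ratio: {pos_attack_cost / pow_attack_cost:.1f}x")
```

This is the "percentage basis" point: the proof-of-work figure is pinned to the emission schedule, while the proof-of-stake figure scales directly with the network's value.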

Uri:

So Mrinal, you brought up a really great point. And it's actually one of the more concise and coherent explanations regarding the value of POS versus POW that I've heard. With that being said, if we brought a Bitcoin maximalist onto the show right now to argue ferociously against it, what would he tell you? Tell me the counter-arguments to what you said. Why would he argue for the current POW, as you see in Bitcoin? And again, we're talking Ethereum, but if I brought a really diehard POW person onto the show and he were to respond to you, what do you think he would tell you? Why would he disagree with you, and what would be his strongest argument?

Mrinal:

Yeah, so his strongest argument would be... a lot of this would be future-based. So one argument they make is that hardware is kind of table stakes, versus with a token, depending on how the team decides to distribute a POS token, they could have concentrated value. What I mean is, a token is a token, right? And you know right up front when the team has decided, we're going to keep 30% of the tokens; they have centralized 30% of the power. And I think that's why teams have to be careful not to do things like that. And so what they say is: hardware, anyone can acquire, and as a result you've made it table stakes. However, if you don't replicate that model in proof-of-stake, and make sure that the token can be broadly accessed, and no one's holding large percentages of it, you've basically just replicated, like, a cartel. So that would be argument one they'd make. Argument two they'd make is that... a lot of Bitcoin maximalists love the fact that there's a decaying supply curve on Bitcoin, that basically the inflation schedule reduces over time. Now, I would argue that, unless they solve this problem, it's a ticking time bomb. It's not a problem right now. But to my point, you only put in as much hardware as the profit you can make off the chain. When you're getting 6.25 Bitcoin per block, that might be okay to sustain a lot of hardware, but at some point the cost to attack versus the cost of hardware will flip the other way. I don't know if that happens at 3.125 BTC per block or 1.5625. I don't know at what point that happens, but you could model it out. But they love the decay curve. And I would agree with them: for a pure store of value, it probably makes sense. And then finally, the other thing they say is that another issue with a smart contracting platform is that it's actually not that necessary. You can build all of that on the application layer. You should just have a very simple protocol, because a simple protocol is much harder to break. 
That's why Bitcoin, unlike Ethereum with its DAO hack, has been more resilient. These are the strongest arguments I hear. I don't agree or disagree with them. I think it's a completely different use case versus a smart contracting platform. And I do think the decaying curve is something that Bitcoin as a community is going to have to solve. The current view is that transaction fees might be increased in lieu of the decaying inflation curve, and therefore miners still get rewarded, and you could still run a lot of hardware profitably. But those are the biggest arguments I hear. And I agree with the first one the most, which is, they say there's no point in proof-of-stake if the stake is ultra-centralized across a few people; that kind of defeats the purpose.

Uri:

First, I appreciate you bringing the counter-arguments, and that's always hard to do. Pushing that even further, I would argue that it's not only the teams that can get a substantial portion of the tokens in the staking. In the current crypto landscape, exchanges, especially centralized exchanges, play a significant role. And if you just take the top three or top five, they control so much staking that, I feel, that becomes dangerous. So yeah, that's a small point. I would now like to turn it back to "Can Ethereum 1.0 scale now or not?" And I want to go to the question that one of you brought up. Yes, you can increase scalability if you hurt decentralization: if you have just two nodes and they're working with one another, yeah, you can make it scale much further if you sacrifice that. And the question is, "Can Ethereum 1.0 scale" without sacrificing decentralization? And I would argue that it can, mostly because the idea is that any improvements on layer-zero must be provably neutral. Nothing that a layer-zero system could do can affect the blockchain, can reduce its scalability, can censor or discriminate between blocks, transactions, miners, validators, or anything like that. And further than that, it must not be a single point of failure. So I'm not saying that, oh well, if we just had faster internet somehow, some company put really great optic fibers in the ground, but now everybody relies on them; that's not the argument that I'm making that Ethereum 1.0 can scale. My argument is that without affecting decentralization, okay, without requiring better hardware, more storage, more bandwidth, without affecting any of those, Ethereum 1.0 can scale right now, because there is a way, that we worked very hard to make happen, for nodes to send one another blocks extremely fast.
The basic idea is we have a system that allows all the nodes to remain in sync regarding transactions that wait to be added to a block. And so when somebody creates a new block and needs to send it to everybody else, he doesn't actually have to send the entire block. He only needs to say which transactions are in there. And so, on the wire, actually much smaller pieces of data are being sent. And this only works if, even if Bloxroute goes down, even if we were being shut down, everything can continue to operate; there's an idea of a backup network. So going back to what Richard said at the beginning, yes, there's definitely a push-back and a balance between decentralization and scalability, but it doesn't affect this argument that we're currently having regarding Ethereum 1.0. Ethereum 1.0 can scale at least to the hundreds of transactions per second, and I'm saying at least because we already see that; like, we were able to make that happen. And whether there is something beyond that point, that requires tweaking for sure, plenty of things; like, the code generally isn't written well enough to handle the high volumes, but not in a way that affects decentralization. Nothing that's being done there is going to be, oh, we're increasing scalability because now fewer players are capable of participating.
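The core idea Uri describes, announcing a block by the IDs of transactions every synced node already holds, can be sketched in a few lines. This is an illustrative toy, not bloXroute's actual protocol or code; the ID scheme and sizes are assumptions.

```python
# Sketch of compact block relay: if all nodes stay in sync on pending
# transactions, a new block can travel as a list of short IDs instead of
# full transaction bodies, shrinking what goes over the wire.
import hashlib

def tx_id(raw_tx: bytes, n: int = 8) -> bytes:
    """Short identifier any synced node can derive from the raw transaction."""
    return hashlib.sha256(raw_tx).digest()[:n]

def encode_block(txs: list[bytes]) -> list[bytes]:
    """What actually goes on the wire: IDs, not full transactions."""
    return [tx_id(t) for t in txs]

def decode_block(ids: list[bytes], mempool: dict[bytes, bytes]) -> list[bytes]:
    """A receiving node rebuilds the full block from its own mempool."""
    return [mempool[i] for i in ids]

# Toy example: 100 transactions of 250 bytes each.
txs = [(b"tx%03d" % i) * 50 for i in range(100)]
mempool = {tx_id(t): t for t in txs}

wire = encode_block(txs)
full = sum(len(t) for t in txs)        # 25,000 bytes of transactions
compact = sum(len(i) for i in wire)    # 800 bytes of IDs
print(f"full block: {full} bytes, on the wire: {compact} bytes")
```

The real system also has to handle the case where a receiver is missing a transaction; here the shared mempool assumption makes decoding trivial.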

Mrinal:

Yeah. So, Uri, just to clarify, I actually don't disagree with what you just said. The reason I brought up security and decentralization was just to say why I think that's also part of scalability. I mean, just looking at throughput without looking at security and decentralization might be a little shortsighted, because you can get throughput by sacrificing [inaudible]. I agree with you, fixing stuff at layer-zero shouldn't, if done properly, affect decentralization or security in any way, if we're just getting blocks propagated faster, with no censorship. Yeah, I don't disagree. The point I was trying to make was why I think security and throughput go hand in hand. Because I think, and again this is broader than just ETH, right, when people view blockchain infrastructure in general, sometimes they just look at a TPS number and say, okay, that blockchain is better because it's got a higher one. I'm like, well, you've got to figure out why. Sometimes it's just like, okay, I like these five guys; these guys are going to decide everything.

Richard:

So Mrinal, can you elaborate on the various bottlenecks in Ethereum 1.0's design that prevent scalability? I know that you have sort of touched upon this a little bit, but maybe any additional color would be helpful. Also, in which ways will the migration to 2.0 resolve such weaknesses? And here's also an opportunity for you to talk about how CasperLabs looks to differentiate itself in comparison to ETH 1.0 and/or ETH 2.0.

Mrinal:

Let's, at a high level, split this into a few pieces. There are thematic bottlenecks here and I'll talk about each one. So theme one is really the shift from proof-of-work to proof-of-stake. And we already talked about why I think proof-of-stake is a better system, both in terms of efficiency as well as security overall, if managed right. Nothing's free; there are gives and takes on both sides. So I think that bottleneck gets solved by ETH 2.0, more so ETH 3.0. The way they've described it right now... plans go back and forth a little bit, as they do in software engineering, they're going to have an interim step called Casper FFG, which really has proof-of-work, Nakamoto-style consensus for the most part and overlays proof-of-stake as a finality system. So a lot of the network will look almost exactly the same, because it's a proof-of-work, proof-of-stake hybrid. But then they eventually go to Ethereum 3.0, which is a 100% proof-of-stake system, running CBC Casper or a CBC Casper variant. And one of the reasons why we started the company was, let's just build that end-state proof-of-stake system that is fully decentralized and secure, which I think is actually a unique characteristic ours has. We don't have any two-node structure; there's only one type of node. It is fully permissionless; it's fully decentralized. So we don't sacrifice any of that. But it's very in line with where Ethereum wants to get to eventually. We just thought, hey, why don't we just do it right now. The second scalability bottleneck, I'd say, is that if you think about proof-of-work systems in general and the way the Ethereum VM works in particular, it's a highly serialized system. Meaning the ordering of transactions is extremely important and you can't really run things concurrently even if they don't depend on each other.
For example, me transferring money to you, Richard, and then Uri transferring money to Shen: these two transactions don't talk to each other. Really they're additive and therefore commutative, and therefore can be run at the same time. And that's really an issue when you have a serialized blockchain. What a full version of CBC Casper does, which is where they're going with 3.0, is you actually build a directed acyclic graph, and I won't get into too much detail. But in essence you can add a level of parallelism and concurrency that you otherwise could not. A great way to think about that is your laptop or your mobile phone right now. You know, the system that plays the sound, the system that puts up the video, or the one that runs your calculator: these are all independent systems and they can run at the same time. There are some interdependencies, which are explicitly declared, but when there aren't, they can run on their own. And so similarly, you want an infrastructure layer to behave like that. And so the shift from a serialized blockchain to something that supports parallelism and concurrency is another bottleneck. And we're solving it by basically implementing a DAG-based protocol right out of the gate. That's literally what we're doing at CasperLabs. And then finally, the other bottleneck is the one we've been talking about in terms of developers. I'll use this to [inaudible] my answer to Ethereum. So, having a proprietary programming language like Solidity, or any programming language really, it takes a lot of time for it to build. And Ethereum was by and large the 800-pound gorilla in the room. A massive market leader. It's got 10 to 15,000 developers out of 26 million. So it's a very, very under-penetrated industry. And so we solve that because we have a WebAssembly-based build; we'll support Rust and AssemblyScript right out of the gate and a bunch of others as well. Second, [inaudible] to Uri's answer. First off, I'll start with: I'm a massive fan of Ethereum.
I've been in the industry since 2012. I bought Ethereum tokens as early as you could possibly buy them. And I've been a supporter of the project for a long time. That being said, and I agree with you, a product that wants to take over the network effects of a market leader needs to be a hundred times better, or ten times better. The two caveats I'd make to that, though, are: one, I think we're very early-stage in this industry, and it's not really in its full competitive state. I mean, if you remember search engines, the same argument was made for AltaVista, the same argument was made for Yahoo. And then you had Google. At every stage people are gonna say, AltaVista will never be beaten by Yahoo, never be beaten by Google. Google, I agree with, because that's the time when the industry actually got fully penetrated. [inaudible] everyone in the industry is fully penetrated. So Google had full penetration, basically 60% of people worldwide in search when they took market leadership, and they had an 85% share. In an under-penetrated industry where 0.6% of developers are using it, there's just so much more white space. I agree that within blockchain, meaning all the people who are in blockchain already, it's probably impossible to compete with Ethereum, or many of the established blockchains. But you have to look for the white space. It's a hugely under-penetrated industry. A similar thing happened with operating systems, right? Like, IBM had 95% market share, and Microsoft didn't go after corporate; it went after the end user first, along with Apple. And the whole landscape has changed completely. So because the industry is under-penetrated, I think there's still an opportunity. That being said, I'm also a massive open-source enthusiast. You know, what we're doing at CasperLabs, we want to build what we hope to be one of the best blockchains out there. We're very, very focused on decentralization and security, but we're also open-source purists.
And if at the end of the day it leads to innovation in the industry, and part of what we built is used by other projects that help push the industry forward, we'll still take that as a win. Obviously we're competitive and would love to see us turn a lot of developers towards the industry. But it's an open-source world. Glad to help bring some thought leadership, and hopefully it's helpful.
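Mrinal's commutativity point, that transfers touching disjoint accounts can safely run at the same time while conflicting ones must stay ordered, can be sketched as follows. This is an illustrative toy with made-up account names, not CasperLabs' actual scheduler or DAG design.

```python
# Sketch of dependency-aware execution: two transfers conflict iff they
# share an account; non-conflicting transfers commute, so a scheduler may
# run them concurrently without changing the final state.
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

balances = {"mrinal": 10, "richard": 0, "uri": 10, "shen": 0}
locks = {name: Lock() for name in balances}

def transfer(src: str, dst: str, amount: int) -> None:
    # Acquire account locks in a fixed (sorted) order to avoid deadlock.
    first, second = sorted([src, dst])
    with locks[first], locks[second]:
        balances[src] -= amount
        balances[dst] += amount

def conflicts(t1, t2) -> bool:
    """Two transfers conflict iff they touch a common account."""
    return bool({t1[0], t1[1]} & {t2[0], t2[1]})

t1 = ("mrinal", "richard", 5)
t2 = ("uri", "shen", 5)
assert not conflicts(t1, t2)  # disjoint accounts: safe to parallelize

with ThreadPoolExecutor() as pool:
    list(pool.map(lambda t: transfer(*t), [t1, t2]))

print(balances)  # each sender down 5, each receiver up 5
```

A serialized chain would force an arbitrary order on t1 and t2 even though, as here, either order (or parallel execution) yields the same state.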

Richard:

Well, thanks for that answer. So I have another set of questions for Uri. My understanding is that in terms of UX, in using Bloxroute as your layer-zero solution, all that's needed to make use of it is for Ethereum nodes to download and run your software. Is that understanding accurate, and what are the challenges of convincing the nodes to use that software? That's the first question. The second question is, if we go back to the actual use case of Ethereum: let's say CryptoKitties, at a time when there were many transactions happening. As people transacted with CryptoKitties, as they bred kittens, the blocks quickly got full, and then other legitimate transactions on Ethereum couldn't get through in time. So if we were to replay that scenario, but with some kind of layer-zero solution in place, what would that scenario look like?

Uri:

So first, to your UX question: the idea that, oh, for this to work you need nodes to run Bloxroute software and have their nodes work with it, etc., is that what's needed? Not really. In terms of what do you need for Bloxroute to happen? Nothing. Okay? Bloxroute is deployed. There is a system that allows the blocks to propagate faster. Nobody needs to connect to it. However, miners connecting to Bloxroute hear blocks faster than others, and the more who join, the better. So even if you're the first miner to join, then because Bloxroute is already connected to nodes all around the globe, you're going to hear blocks faster. That means you can start mining, working on the next block, sooner, and you're going to make more money. So nobody needs to connect to it, but miners make money. It starts as a competitive advantage and really quickly it becomes a competitive necessity. If everybody else is connected to the fast network and you're connected just to your peers, it's like you're running your operation with, I don't know, a 512-kilobit analog modem from '97 or something. You can do that, but you're going to be slower than everybody else and you're going to make less money. And so the first people we started working with were miners and pools, telling them, oh, here's the technology. Don't change anything in what you do. We built Bloxroute in such a way that nodes don't even know that they're connected to it. If you run a node or a miner or a pool, we give you an open-source gateway. You run it on your machine, you tell your node: connect to this gateway. So he thinks it's a peer. Just a friendly neighbor peer. It's a peer, but it sits on the same machine. He doesn't trust it in any way, but that peer is going to tell him about blocks from the outside faster than his other peers. And when he's told about a block, he tells the others.
However, Richard, if you are using Ethereum, you don't care. Like, you don't need to connect to Bloxroute. You don't need to know that Bloxroute exists. Again, the same way that you don't know whether the wires connecting, you know, Japan with mainland Asia are optic fiber or copper. You don't know. You really don't care. So by making miners and nodes connect faster, that allows for scalability. If you are a user, if you're running a node, if you're mining, you could connect to us; not "should" in the sense of an obligation, it would just be more profitable for you. You really don't have to. It gives an edge. If you are a user, if you are running a node, if you're Infura, if you're Ethereum, something like this, you don't have to, you don't need to. It does offer some services and advantages, so you can choose to connect. The same way you can choose to connect to Infura, or to ETH Gas Station, you can choose to connect to Bloxroute. Because Bloxroute will tell you about transactions as they happen really, really fast, which is important for DeFi traders. Or, if you make a transaction and you gave some gas, usually you have to just wait and see, well, let's see if it goes, when it goes, and we'll give you feedback on that. These are the kinds of services we offer, like reducing fees. But going back to your original question: for Ethereum 1.0 to scale, what do I need? I need very little. I give it away practically for free to miners and pools. I allow them to hear a block faster, and then it's in their hands. That's the way Ethereum works.

Richard:

So to Mrinal's earlier point though, does this introduce a point of centralization? So for example, let's say you have a finite number of Bloxroute servers installed, and then all these miners are incentivized to connect with them in order to hear blocks faster. Then does that mean your entity is now sort of in control because you have this very superior offering, but it's sort of concentrated in one entity's holding?

Uri:

That is a great point. If that was the whole story, the answer would be yes, and nobody should use us. Nobody should use us if that was the case. However, we made sure that we can't discriminate or censor, right? We can only, as I explained earlier, keep everybody in sync regarding transactions.

Richard:

How do you prove that, that you're not censoring?

Uri:

There are tons of technical details, but at a high level, here's how it works. Every few hundred milliseconds we send to everybody: oh, here are new transactions. Okay, so these are sent to everybody, all the gateways, and through them to all the nodes who are connected: here are new transactions, here are new transactions. So these nodes, everybody who is listening to Bloxroute, knows that the others know exactly the same transactions. These updates that we send, we timestamp them and we sign them. Every time a gateway receives such an update from Bloxroute, it tells its peers, the other gateways. So the gateways have created a peer-to-peer network of their own. They say, oh, here's the update that I got, here's the hash of it. Did you get the same? So everybody verifies that everybody got the same, and I can't give one transaction ID to one and then not tell it to somebody else. Like, I can't give inconsistent updates. Or, if I do that, it's immediately visible. And what happens in that scenario is the same thing that happens if Bloxroute goes down. So using this mechanism, it is visible if we're misbehaving, and the solution to the misbehavior is the same as if Bloxroute went down. What happens when Bloxroute goes down? Well, we introduced the concept of backup networks. Anybody can take our code and deploy a backup network, an idle version of Bloxroute. Okay? It's a network that can do the same thing, connect to everybody, but in reality it doesn't do anything. So they're standing there idle, waiting for the doomsday scenario where Bloxroute either goes down or misbehaves. If that were to ever happen: okay, we're being shut down by the government; we went to Vegas and we lost all our money; or, what I love to call the Steven Seagal scenario, somebody kidnaps my family and forces me to reject transactions or blocks, and I am coerced to do so. If that happens, everybody sees that it happened.
Whether because we sent inconsistent updates or we just went down, everybody then moves to use backup network number one, and everybody knows who's backup network number one. If that misbehaves too, everybody goes to backup network number two, number three, etc. And the incentive to run a backup network is that if you are a miner, if you are a pool, if you're a big business like Coinbase, if you have a significant stake in the blockchain continuing to operate, this is a cheap insurance policy, okay? You run a backup network; it doesn't do anything, so it doesn't incur a lot of costs. It costs you $100 or $300 per year, depending on the size, etc. It doesn't do anything. But in the worst-case scenario, if you're a miner, you know that even if Bloxroute goes down, your blocks will propagate fast. You'll hear about blocks from the others. And everybody can continue to use the backup network for six months or for a year or for two years, any period of time. It's not as cost-efficient as Bloxroute, but it's better than going back to 10 transactions per second. And so the idea is that there are people who have a stake to run these, and they are the safeguard against Bloxroute. So to wrap up your question: if we were capable of sending inconsistent updates, or if, when we go down, everything goes back to how it was, nobody should use us. But we built it in such a way that that is not the case. Again, I'm not trying to argue why Bloxroute is awesome, because I don't want the question to be about Bloxroute, but I think that gives the high-level idea there. Does that make sense?
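The cross-check Uri describes, gateways gossiping the hash of each signed update and flagging the relay if the hashes diverge, can be sketched as follows. This is an illustrative toy, not bloXroute's actual code; signing and timestamps are stubbed out, and the names are made up.

```python
# Sketch of the gateway consistency check: the relay broadcasts transaction
# updates; each gateway hashes what it received and compares hashes with its
# peers. Any divergence (e.g. a withheld transaction) is visible to everyone.
import hashlib
import json

def make_update(tx_ids: list[str]) -> dict:
    # Real updates would also carry a timestamp and a signature.
    return {"txs": sorted(tx_ids)}

def digest(update: dict) -> str:
    """Canonical hash of an update, so all gateways hash identically."""
    return hashlib.sha256(json.dumps(update, sort_keys=True).encode()).hexdigest()

def consistent(received: list[dict]) -> bool:
    """True iff every gateway received the exact same update."""
    return len({digest(u) for u in received}) == 1

honest = make_update(["tx1", "tx2", "tx3"])
# Relay tells one gateway about tx3 but withholds it from another:
censored = make_update(["tx1", "tx2"])

print(consistent([honest, honest, honest]))    # all gateways agree
print(consistent([honest, honest, censored]))  # mismatch: misbehavior detected
```

Once a mismatch is detected, the response in Uri's description is the same as a relay outage: everybody falls back to the first backup network.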

Richard:

Yes. And Mrinal, so we're discussing whether ETH 1.0 can scale. And a big part of that is all these solutions that could help make things happen, such as a layer-two solution or a layer-zero solution. In particular, today we're fortunate to have someone from the layer-zero world, basically describing a method of increasing block propagation without giving in to the trade-off of decentralization, it seems. So it almost looks like having your cake and eating it too. If we were to establish that fact, do you still see issues with Ethereum 1.0 not being able to scale, beyond what you mentioned about the adoption problem?

Mrinal:

I do think layer-zero and layer-two solutions will make it faster than it is right now, but nowhere near as fast as it can be. It goes back to my point where I just think efficient management of resources is extremely important, and that's why the shift from proof-of-work to proof-of-stake is really, really important. If you're getting a few hundred transactions per second with a great layer-zero on proof-of-work, you're going to get thousands if you move to proof-of-stake, just because of the redirecting of the processing power. If you're getting good security from hardware because of the 3-5% emission rate that I covered, you're going to get even more security with 50% of tokens staked on the network. So regardless of improvements made, it won't compare to the level of improvement you can see when the ETH 2.0 or 3.0 transition happens, or you run a similar system on CasperLabs or some other network, assuming they become big as well. So that's one. And the second thing is just to dig into the serialized nature of 1.0 again. The shift to proof-of-stake, especially a CBC Casper style consensus mechanism, will enable a level of concurrency and parallelism that doesn't exist. The issue with proof-of-work systems is, since they're highly serial, you can't take advantage of a very broad footprint of computational power. When you shift to proof-of-stake, especially one that is directed-acyclic-graph based, you actually get so much more efficiency, because you can have a lot of concurrency and parallelization. So I do agree it can get faster, and it can scale a little bit more. My point is, for it to truly scale and truly be efficient, that transition from proof-of-work to proof-of-stake needs to happen. That's why there's so much work and research going into it. I don't disagree with the point that a layer-zero doesn't necessitate a problem with decentralization or security. There I think we're fine.
I'm just saying, all of that will look even better as those transitions happen.

Uri:

I would argue that, regarding the question that we're discussing right now, can ETH 1.0 scale: I think it can scale, it can scale a lot, but Mrinal is right to say, hold on, we can do better. Like, we can build a blockchain that can work with great parallelism. And so for everything that you can achieve with ETH 1.0, they at CasperLabs believe they can do 10x, 100x, I'm not sure what their number is. But that is a good and strong argument...

Mrinal:

Or even ETH 2.0, or 3.0, I think we'll be much faster, right?

Uri:

At the current state of the ecosystem, when Ethereum 1.0 is the market leader, we are working with the miners and the pools. So if the audience isn't familiar: unlike Bitcoin, where you have to change the protocol to increase the block size and scale the blockchain, Ethereum miners and pools can vote every time whether they want to increase it by a bit or decrease it by a bit. And so if a majority of miners vote to increase the capacity of the Ethereum blockchain, it increases the capacity. It actually happened around September 1st, 2019; it was increased by 25%, which is a lot. That was after... So we at Bloxroute work really closely with the miners and the major pools, exactly on that. All this stuff out there is terrific and worth having, and I'm very glad that people are investing their time and efforts in pursuing it. And I agree strongly with Mrinal: this is early. Think of how few developers even know how to handle blockchains. But Ethereum 1.0 can definitely scale significantly, at least one order of magnitude, but really closer to two. So I want to point that out.

Mrinal:

I don't actually disagree with that. I do think it can scale more than it does right now. It's just that the transition to ETH 2.0 and 3.0 is very, very important, because... we're kind of in agreement here. That's where you'll see ultra-scale, so to speak. And to my point, I think the economic security ultimately increases a lot as we make that transition. But I do agree a hundred percent, and I'm glad you guys are working on it, on layer-zero and the actual limiting of latency, etc. It's great that you guys are working on it. I mean, and I'm sure Uri knows about this really, really well, but if it wasn't for CDNs like Akamai, Level 3, etc., that are basically the backbone of the internet, I mean, I almost think of them as analogous to what Bloxroute is doing for the blockchain, we wouldn't enjoy things like Netflix, etc. So it is definitely great that people are working on this problem as well. And I think, as we make consensus better, which is what Ethereum is doing and what we're doing, as we make the network problem better, which is Bloxroute, and then also batching, which is what the layer-two guys are doing, then across the ecosystem we'll finally, hopefully, have a stack that's competitive with what we see out there in terms of infrastructure, like the AWSes, the more centralized repositories of computers.

Richard:

So ETH 2.0 definitely scales better than ETH 1.0. But to the extent that we're talking about improvement upon the existing ETH 1.0 architecture, can we get it to a point where the scalability is acceptable? Is it sufficiently scaled? Which is why I brought up the CryptoKitties problem. So maybe it's better to quantify things: if we were to replay that scenario, what would the Ethereum network congestion look like as a result? I think that would be an interesting thing to look at. So I was wondering if anyone has any thoughts there?

Uri:

So I would actually like to point out that it's not just CryptoKitties. I assume you're both familiar with what happened with MakerDAO on Black Thursday. Something very interesting happened there and I think it's important to point it out, for people to understand scalability better, and that will help illuminate the point that you're bringing up. It's not just CryptoKitties, okay? What happened with MakerDAO is that the blockchain got so congested that MakerDAO did not operate as intended. There were underwater assets that people were supposed to bid on, I think four and a half million dollars' worth of ETH. People were supposed to bid on it, and whoever made the highest bid was supposed to get it. And that was how the market should have made sure that the assets were priced correctly. But what happened when layer-one got congested is that nobody was able to make a bid. And so one person just made a bid and got four and a half million dollars' worth of ETH, because nobody else was capable of making a bid. Now, it could be that there was also a bug there; I didn't look closely at the postmortem. But it does bring up a very important point. Practically all the scalability solutions out there, all the layer-two solutions out there, go back to this fraud-proof idea: the idea that if something is wrong, if something isn't priced right, if there is some misbehavior, you have a window of time in which you can send a transaction and prove, hold on, there is some misbehavior going on, I'm being defrauded here. However, for layer-two to work, you need layer-one to have excess scale and not be easily congested.
And so if there was a full-blown, operating, best layer-two solution out there working on Ethereum during that Black Thursday, and somebody decided to defraud and say, oh, I'm going to commit to the blockchain some older state... it used to be valid, but it's no longer valid because the state had changed. Even if somebody saw that, if the blockchain is congested, then you couldn't prevent him from doing it. You can't get your transaction in, or maybe you could, but it would cost you more than you're being defrauded of. Now, take that and multiply it by a thousand. Not a single person being defrauded; think of a decentralized exchange or something like that, where a lot of transactions happen off-chain, and then somebody with some measure of control goes and takes that and defrauds everybody at the same time. Now you have a thousand people who, under congestion, are fighting to show that they're being defrauded, but there is no room for them. So I would argue that you need layer-one to scale even if you have a layer-two solution operating there. Mrinal might have a take on that, in the context of on-chain activity and how Casper might react to it, but generally speaking, this is an issue. You need to scale the blockchain to such an extent that you can do off-chain stuff, but you can also do on-chain stuff if you need to. Does that kind of make sense?

Mrinal:

I agree that layer-two is not a panacea, like, oh wow, layer-one is too slow, but we can solve this in layer-two. Layer-two is extremely important, an integral part of the ecosystem. But yes, if layer-one isn't fast enough, that's your trust anchor, for lack of a better term. And so the amount you have to batch is kind of inversely proportional to how fast the underlying system is. And so you can run into, you know, race conditions like you've laid out. So I actually agree with that 100%. To Richard's question of, is it fast enough: to scale a blockchain to be a competitive trust anchor, a decentralized trust anchor, similar to an AWS or Google Cloud or something, I think you have to get into the tens of thousands of transactions per second. Now, I don't know if that's possible in a fully decentralized manner with present technology; we've not seen it implemented. You can see a few thousand in tests in a single shard; again, fully decentralized, permissionless, 1,500-plus in tests. And so you get pretty fast. But you also have to start doing things like sharding and partitioning of the network, because I truly believe that number is more in the tens of thousands. I'm not talking about the industry as it is now, right? We're talking about a 0.3% penetrated industry. I'm talking about what happens when it becomes a 50, 60% penetrated industry. That's real scalability. At that point you need tens of thousands. People like to use Visa as a metric, which can peak at around 40,000 transactions per second. But bear in mind, that's just money transfer. If you're actually doing things like propagating every single intellectual property record onto the blockchain, if you want to be monitoring supply chains, even with a level of batching, you can see that the amount of worldwide, call it writes or commits to the blockchain, could get very big, very fast.
We're already seeing things throttling for such an under-penetrated industry, with no real, big industrial use cases yet. Once those start coming on board, the numbers you're going to need are very, very high. So I don't know if a couple of orders of magnitude is sufficient for the real big prize, for the end state. But I do think for current usage of blockchains that will probably suffice. But when I'm thinking about scalability, I'm always thinking about, okay, a decade and a half from now, and I'm hoping this ends up happening, blockchains become a major part of the internet's technology stack. At that point I think we're looking closer to tens of thousands. I don't know what the exact number is. I think you'd have to look at the writes and commits that are done on the OS. You can of course partition them into some portion that requires a blockchain and some that don't. There's a lot of stuff that, I think, some people try to throw on the blockchain that doesn't need to be there; it's just the stuff that needs a decentralized trust anchor. But even that should be tens of thousands. Because if you take TPS as a single write or a single commit, AWS itself is probably doing millions if not billions a second across their entire infrastructure. So it's a hard question to answer, but I think several orders of magnitude above where we are as an industry right now, which is why I think the efficient shift over to proof-of-stake, as well as parallelization and concurrency, while in their infancy, are sort of the building blocks to getting toward that scale eventually.

Uri:

I actually think Mrinal here is underestimating. If you look at credit card companies, last year they did an average of 6,000 transactions per second. Now, that's average. Alibaba holds the record with 325,000 transactions per second on Singles' Day, like two or three years ago. But that is for regular transactions; you need 6,000 transactions per second on average. If you want to do micropayments, which you can't do right now because a transaction is prohibitively expensive, you immediately reach the area of like 50, 60, 70,000 transactions per second, like 10x from there. Add in programmable money, add in DeFi algo trading on decentralized exchanges, and practically IoT devices that [inaudible] with one another. The way we estimate it, at the end game, it is like 200, 300, 400,000 transactions per second. I actually think tens of thousands is underestimating it. If you're looking at supply chains, if you were Boeing, poor Boeing: to build the Dreamliner, they have something like 50,000 different subcontractors. Each one of them has sub-subcontractors. And so if you want a system that allows you to see everything as it happens, and nobody can tamper with it and nobody can hack it, 'cause you want to know whether things are actually delivered or not delivered on time, etc., just that one use case is very, very, very big. So tens of thousands in my estimation is actually an understatement.
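Uri's back-of-envelope arithmetic can be sketched as follows. The figures are the rough estimates quoted in the conversation, not measured data, and the 10x micropayment multiplier is an illustrative reading of his "10x from there" remark:

```python
# Back-of-envelope throughput targets quoted in the debate.
# These are rough conversational estimates, not measured data.
CARD_NETWORK_AVG_TPS = 6_000   # credit card companies' average (Uri's figure)
ALIBABA_PEAK_TPS = 325_000     # Alibaba's Singles' Day record (Uri's figure)

# Micropayments: roughly a 10x multiplier on ordinary card traffic.
MICROPAYMENT_MULTIPLIER = 10
micropayment_tps = CARD_NETWORK_AVG_TPS * MICROPAYMENT_MULTIPLIER  # 60,000

# Uri's end-game estimate once programmable money, DeFi, and IoT are added.
endgame_tps_range = (200_000, 400_000)

print(f"micropayments alone: ~{micropayment_tps:,} TPS")
print(f"end game: {endgame_tps_range[0]:,} to {endgame_tps_range[1]:,} TPS")
```

Even the micropayment figure alone lands at the bottom of Mrinal's "tens of thousands" range, which is why Uri calls that range an understatement.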

Mrinal:

Yeah, I agree. It might not even be enough. What I meant to say was "at least," right? Like yeah, it might not be enough. My view... hopefully what happens though is, batching systems get efficient enough. They almost have to, because at the end of the day, blockchain will always be a more expensive technology than a regular database or a regular AWS, just because you're getting a ton of security by having a lot of different computers run the same thing. And so just by its underlying structure, of course we can make this more efficient, but just by its underlying structure, it'll always be, I don't know what that number is, something-x the cost of deploying a database, or something like that. And so some level of batching will be required. But I agree, hopefully we get to a state where tens or hundreds of thousands is sufficient. But I agree with you, if you were to decompose everything right down to the atomic unit of a read or write, those numbers could be, again, orders of magnitude higher than what people actually want to propagate onto a decentralized store of data and value.

Richard:

And note that this floor Uri is providing for the TPS needed for Ethereum to really be considered scalable, that's sort of working against your argument though, right?

Uri:

So I wouldn't consider it working against my case. I would argue that this is the end game. This is 10 years from now, if not 15, as Mrinal pointed out. Um, so the question is pretty much what's going to be... we have seen Ethereum grow in an exponential way, and nothing grows forever in an exponential way, it just grows too fast. But we have seen the grassroots movement and people making and building stuff, and it's terrific. The question is, can Ethereum 1.0 scale? So we are always ahead of the curve of the need. Okay. So when we worked with the miners and they increased it by 25%, that was immediately eaten up, because the demand for transactions is significantly higher than just 25% more than the current capacity. But can Ethereum 1.0 be scaled by 10x right now? The short answer is yes. We deployed bloXroute on the Ethereum network, with the current state, etc., and even when we were just half as fast as we are today, we saw hundreds of transactions per second. Are there going to be challenges moving from that to the next x? Definitely. But these challenges a lot of the time have to do with how the code is written. And that's true for almost all the blockchains out there. They're operating at such a small scale, so they are not written to operate at... the operations are just too slow. They're not written to operate at thousands of transactions per second. That said, some people are writing it well. Vitalik told me about one of the dev teams, I don't remember if it was iPegasus or somebody else, very, very professional in how they're writing their code. And so "can Ethereum 1.0 scale" really goes to the question, can it give more capacity than is being consumed and needed? And that is absolutely the case right now, and it is very, very likely that a year from now, the growth we see in capacity is going to go hand in hand with the demand.
So yes, we're not ready for the end game. ETH 1.0 is nowhere near ready for the end game. But we have 10 years to get ready for it.

Richard:

Okay, perfect. So let's move on to round three, with an audience question and the closing remarks. There's one question from an audience member named @youngbitness: The overcrowded nature of public chains these days will lead to inevitable consolidation in the next few years. How do you see M&A playing out in the space? So this is clearly not strictly related to ETH scalability, but it is a question from the audience. I'd love to hear answers from you both.

Mrinal:

So I agree with the insight. It's kind of overcrowded, and M&A is kind of inevitable. This is how the software industry works in general, right? I mean, when people think of companies like Microsoft or Oracle, and I'm not saying that that's where blockchain is headed, but just to give historical context, they're not one company. Microsoft has acquired, I don't know, a few thousand companies. Cisco has acquired a few thousand companies. So M&A has just been a historical truth of the software industry. And it does happen. I think M&A is going to happen. It's not really happened a lot in our industry. I mean, there's that BitcoinDark and Komodo merger, but I wouldn't call that true M&A. But expect to see a lot more. There's the added complexity where the standard M&A, equity or debt process is very well understood, and it's something that I personally worked in for many years. However, when you add the community and token elements to it, there's that complexity. I don't want to go into too much detail here. I mentally have a framework of how I see it working, but essentially very similar to how equity companies have merged, private companies have merged. I see something very, very similar happening here, where people take the value of all the stakeholders and all the assets, figure out what the merged entity looks like, and figure out how to split it across the stakeholders. It's really, at its essence, that. And as an overcrowded space, I see that happening a lot. I don't know whether it starts happening in a year or two, but there are some very well funded projects that have not-so-good technology. There are some not so well funded projects that have excellent technology. And if you think about it, that combination would make so much sense, if the two of them just combined, because they offset each other's foibles. And so I would expect it to start happening. As an industry it's fairly immature, and that's why we haven't seen it happen.

Uri:

So first of all, I would like to say that I really like Mrinal's point. You have really well funded projects with not the best technology, and vice versa. And I think that's just absolutely true and correct. The synergy there is just too great not to happen. But if you're thinking from a framework perspective: when I was asked about this like two years ago, how do I see the space progressing, my take was, which is not the take I have today, but my take at the time was that blockchain and crypto have multiple value propositions, right? You have programmable money, you have uncensorable money, you have hard money. Bitcoin has hard money in a way that ETH doesn't have, and ETH has programmable money that BTC doesn't have. You have private transactions, right, if you think about Zcash or if you think about Monero. So my take at the time was that, depending on how many value propositions are out there, you will see multiple, but not a thousand, not a hundred, like three or five or seven. Right? If you could incorporate Zcash or Monero into one of the others, then they shouldn't exist and they won't exist, and vice versa. So that was my take at the time. However, having spent some time in the real world blockchain space, I think regionality plays a significant role that people are missing. And so even if you have a project like Ethereum, and you have a project like Ontology in China, you might think, well, they are different, but there are a lot of synergies there, why would they both exist? There is a strong community in China which is different from the community in the US, and the resources that are available to them, and you see them creating different ecosystems. And I'm saying China just as an example; it's not that all of Asia is the same. Japan is very different from Korea, which is very different from China. So my guess is that we're going to see consolidation, but not to two, five, seven blockchains, more like, I don't know, 20 or 30.
You'll see blockchains serving South America and Europe. You'll see one for the Arab world, you'll see one for the US, Canada, English-speaking countries. You'll definitely see several projects in Asia, some of them working with governments, some of them open, some of them all sorts of hybrids. And so I see the acquisitions and mergers coming, and this is happening. I think the current crisis is actually going to accelerate that significantly. So we're definitely going to see all sorts of projects running out of money and not being able to raise enough funds. And that makes for easy acquisition options for other companies which are better funded and could use that technology. But eventually I think, and I don't know that better than anybody else, that's just my opinion, we'll see this consolidation to a small number of blockchains, but not five or seven. We'll see a bit more than that.

Mrinal:

The way I've been thinking about it is, M&A in software has usually been global, whereas M&A in the banking industry has been very, very hyper-regional, right? Like European banks acquire other European banks, same in Asia, actually almost at the individual country level. And given blockchain is kind of the intersection of both these things, it's hard to predict how global or regional it is. And I think you're right, we'll see some bifurcation there. What exactly that looks like is impossible to predict; I don't have a crystal ball. But in general I agree with the points. I just thought it was interesting that you talked about geographic disparities, because we have seen that be a driver for regionally limited M&A, especially in banks, etc., historically.

Richard:

Great. I would love to invite you guys to offer your closing remarks now, starting with Mrinal. Please go ahead, Mrinal.

Mrinal:

So my closing remarks are: I think ETH 1.0 can be vastly improved by layer-zero and layer-two solutions. But the reason why I say it can't scale is because I'm talking about the end game. We need a system that is extremely efficient, extremely parallel and concurrent, and allows its security to scale with the value of the network. And I think that is much more ETH 2.0 or 3.0. So while I think ETH 1.0 can't scale, I would also agree that that isn't a reason not to scale it in the interim, because that's the way technology evolves. So I kind of stand by my stance that I don't think ETH 1.0 will scale, in its current form, to be a solution that can really be a worldwide, fully used blockchain, but I'm pretty confident ETH 2.0, 3.0 will, if it gets there. That being said, I will say that the optimizations at the layer-zero level are extremely important, and will be a huge driving factor both for ETH 1.0 and other blockchains as you know [inaudible]. But overall, more than the debate itself, I just wanted to add that it was really enjoyable to speak with Uri, because I think on most things we agree more than disagree. And it was a fascinating and intellectually stimulating debate, which hopefully the audience feels as well. And that's the whole point of debate. Hopefully we all learn something out of it, and I certainly did.

Richard:

Great. Please go ahead Uri.

Uri:

Ethereum 1.0 can scale right now. It can scale by a multiplier, like an order of magnitude. Can it be designed better? For sure. Definitely. Can ETH 2.0 and 3.0 and other blockchains improve on it or be competitive? All that is absolutely true. But whether it can scale right now, the answer is definitely yes. And can it continue to scale? I don't see any reason that it can't. So while you might not reach the end game with the current design, you can make progressive updates to it. It's kind of solving a problem that nobody's facing right now. How do you know that this is what you need, versus some other aspect of the network or some other aspect of the consensus which we are not considering? Ethereum 1.0 can scale because we saw that, and I have yet to see evidence that even going forward it cannot be changed. And like, yo, you need POS, you need to change the consensus? No, no, no. Hold on. Let's see the point where it doesn't scale further, and then decide whether you really need 2.0 or 3.0, or a new blockchain. But that being said, I agree that me and Mrinal did a terrible job of debating with one another, because we mostly agreed. But I think it was really educational and enlightening, so I appreciate the opportunity as well.

Richard:

Great. Uri and Mrinal, it's been an absolute delight to have you come on the show and debate. I've learned a lot and I think you've added new ways of thinking to our listeners' existing perspectives on Ethereum 1.0 scalability. It will be interesting to see how things unfold from here. Lastly, how can our listeners find you?

Mrinal:

So I don't use Twitter, so if you see my name on there, that's not me. Don't respond to that person. If you want to find out more about our project, just go to CasperLabs.io. We have all our technical documentation and we have links to our GitHub. We're an open source project. You can download all the code today. You can look at every single line of code we have out there. If you want to look at it, use it, whatever, it's all out there for you. But if you want to interact with the team, the best starting point is Telegram: t.me/casperlabs. Myself and lots of members of the team try to be as responsive as we can within the channel. But if you want to reach out to me directly as well, just find me in the CasperLabs channel on Telegram.

Richard:

Perfect. Uri?

Uri:

If you wonder why we're called bloXroute, it's because we route blocks. We spell it bloXroute. You can look for Uri bloXroute, you can look for Uri blockchain on Twitter, on LinkedIn, bloxroute.com.

Richard:

Okay, thank you. So listeners, we would love to hear from you and to have you join the debate via Twitter. Definitely vote in the post-debate poll, and also feel free to leave your comments and say hi. We look forward to seeing you in future episodes of The Blockchain Debate Podcast. Consensus optional, proof of thought required. Thank you guys. Goodbye.

Mrinal:

Thank you.

Uri:

Thank you, that was great!

Richard:

Thanks again to Uri and Mrinal for joining the show. The layer-zero solution to scale Ethereum 1.0 is quite interesting, and it sounds like the friction of adoption is also quite low. After the debate, I learned that the related product is already live and being used by large Ethereum mining pools. And to follow up on Mrinal's point: if scalability can be broadly interpreted to include dev adoption, then embracing open programming standards and supporting languages widely known outside of crypto would be key, which is what they're doing at CasperLabs. What was your takeaway from the debate? Don't forget to vote in our post-debate Twitter poll. It will be live for a few days after the release of this episode. And feel free to say hi or post feedback for our show on Twitter. If you like the show, don't hesitate to give us five stars on iTunes or wherever you listen to this. And be sure to check out our other episodes with a variety of debate topics: Bitcoin's store of value status, tokenization and smart contracts, DeFi, Bitcoin halvening, China's future in blockchain, POW vs POS, oligopoly vs multiplicity of public blockchains, and permissioned vs permissionless blockchain for enterprise usage. Thanks for joining us on the debate today. I'm your host Richard Yan, and my Twitter is @gentso09. Our show's Twitter is @blockdebate. See you at our next debate!