Evan Shapiro (@evanashapiro)
Anatoly Yakovenko (@aeyakovenko)
Richard Yan (@gentso09)
Today’s motion is “Today’s blockchains can’t increase TPS without taking a hit on decentralization.”
This is a follow-up debate, or you can think of it as a rematch. Previously Emre from O(1) Labs also debated Anatoly from Solana on this very topic on the show. So make sure to check that out if you're interested.
Here are some of the topics we covered:
* the inherent shortcoming of proof of stake in guaranteeing the canonical chain for a new full node
* why some chains have been designed to disallow rollback beyond a certain point
* how Evan thinks a faster syncing process for new full nodes will allow further decentralization
* why Anatoly thinks trustless syncing doesn't solve the Byzantine Generals Problem
If you’re into crypto and like to hear two sides of the story, be sure to also check out our previous episodes. We’ve featured some of the best known thinkers in the crypto space.
If you would like to debate or want to nominate someone, please DM me at @blockdebate on Twitter.
Please note that nothing in our podcast should be construed as financial advice.
Source of select items discussed in the debate (and supplemental material):
Evan is CEO of O(1) Labs, which operates Mina Protocol, previously known as Coda Protocol. He used to be an engineer at Mozilla and at the Personal Robotics Lab at Carnegie Mellon University.
Anatoly is founder and CEO of Solana, a layer-1 public blockchain built for scalability without sacrificing decentralization or security, and in particular, without sharding. He was previously a software engineer at Dropbox, Mesosphere and Qualcomm.
TPS Debate (Re-match)
Richard: [00:00:00] Welcome to another episode of The Blockchain Debate Podcast, where consensus is optional, but proof of thought is required. I'm your host, Richard Yan. Today's motion is: "Today's Blockchains Can't Increase TPS Without Taking A Hit On Decentralization: Part Two". This is a follow-up debate, or you can think of it as a rematch. Previously Emre from O(1) Labs also debated Anatoly from Solana on this very topic on the show. So make sure to check that out if you're interested.
[00:00:36] Here are the things we discussed: the inherent shortcoming of proof of stake in guaranteeing the canonical chain for a new full node, why some chains have been designed to disallow rollback beyond a certain point, how Evan thinks a faster syncing process for new full nodes will allow further decentralization, why Anatoly thinks trustless syncing doesn't solve The Generals' Problem, and more.
[00:01:00] If you're into crypto and like to hear two sides of the story, be sure to also check out our previous episodes. We feature some of the best known thinkers in the crypto space.
[00:01:07] If you would like to debate or want to nominate someone, please DM me @blockdebate on Twitter. Please note that nothing in our podcast should be construed as financial advice. I hope you enjoy listening to this debate. Let's dive right in.
[00:01:22] Welcome to the debate. Consensus optional, proof of thought required. I'm your host, Richard Yan. Today's motion: Today's Blockchains Can't Increase TPS Without Taking A Hit On Decentralization. To my metaphorical left is Evan Shapiro, arguing for the motion. He agrees that today's blockchains can't increase TPS without taking a hit on decentralization. To my metaphorical right is Anatoly Yakovenko, arguing against the motion. He disagrees that today's blockchains can't increase TPS without taking a hit on decentralization. That is basically saying blockchains can scale without sacrificing decentralization. Now, this is a rematch. A few months ago, Emre Tekişalp from Mina, which was called Coda at the time, had a debate on this topic with Anatoly. I look forward to seeing updates in their thinking in today's discussion. Gentlemen, I'm super excited to have you join the show. Welcome.
[00:02:15]Evan: [00:02:15] Thanks for having us.
[00:02:15] Anatoly: [00:02:15] Thank you.
[00:02:16] Richard: [00:02:16] Here's a bio for the two debaters. Evan is the CEO of O(1) Labs, which operates Mina Protocol, previously known as Coda Protocol. He used to be an engineer at Mozilla and at the Personal Robotics Lab at Carnegie Mellon University. Anatoly is founder and CEO of Solana, a layer-one public blockchain built for scalability without sacrificing decentralization or security, and in particular, without sharding. He was previously a software engineer at Dropbox, Mesosphere, and Qualcomm. We normally have a few rounds; there's also an opening statement followed by host questions.
[00:02:50] Currently our Twitter poll shows that 70% agree with the motion and 20% disagree with the motion. After the release of this recording, we'll also have a post-debate poll. Between the two polls, the debater with the bigger percentage change in his or her favor wins the debate. Evan, please go ahead and get started with your opening statement.
[00:03:09] Evan: [00:03:09] When you have a blockchain, it's extremely important that everyone's able to access it, that it's not going to be possible to censor people, that it's not going to be possible to limit who can access the chain, and that it's not going to be possible to treat people differently depending on who they are, how they're accessing it, and such.
[00:03:28] And it becomes much harder to provide those kinds of guarantees as you increase the throughput of today's blockchains. As you increase the throughput, the cost to a user of actually connecting to the chain, verifying it, and knowing they're using it in a trustless, safe way increases, as you have to look at the actual transactions underlying the chain, understand that they're correct, and understand where that leaves you in terms of the state of the blockchain. What this means is that blockchains today have become pretty hard to use. If you look at blockchains like Bitcoin and Ethereum, which have pretty low throughputs, it's still hundreds of gigabytes to run a node and connect to the network.
[00:04:08] That is something that is feasible if you're a programmer with a server and access to a cloud machine or your own desktop, but it's something that's already out of reach of regular people, and as you increase throughput it becomes magnified, limiting the set of folks that can connect to cryptocurrencies even more.
[00:04:25] Richard: [00:04:25] Okay, great. Anatoly, please go ahead.
[00:04:28] Anatoly: [00:04:29] So I guess fundamentally, what decentralization means to me is a solution to The Generals' Problem. What we're building is solving a hard computer science problem. Fundamentally, that is building a censorship-resistant network that guarantees that messages can be delivered between all the Generals, and they can decide to attack Constantinople. In my mind, the only way to solve that problem is to grow the network to such a large number of nodes participating in consensus that the possibility of disrupting it approaches zero. The only way to do that is to increase throughput and increase TPS. My stance is that you can't solve this problem without scaling the fundamental core TPS part of a blockchain.
[00:05:23] Richard: [00:05:24] A quick follow up question to both of you. Anatoly, to your point of having to increase TPS in order to enhance the network and enforce decentralization, I think that totally makes sense. Are you able to speak about the approach being undertaken at Solana, in hopes of increasing that TPS, which subsequently increases decentralization?
[00:05:51] Anatoly: [00:05:52] The way we've undertaken this is: when you look at the actual network and the number of nodes that participate in consensus, the messages that are votes, that actually guarantee safety and liveness, are transactions. There's no way to increase that set without increasing the capacity of the network to handle a higher TPS. No matter what you do, no matter how you try to hide that fact, when you're trying to synchronize a large enough network: if you try to shard it, you actually reduce that set; if you try to aggregate those messages, you create eclipse attack vectors. There's no way to do this without actually creating a bigger pipe and maximizing that set of nodes. Imagine a network of 10,000 machines equally staked: you need at least 3,334 machines to be compromised for those Byzantine fault tolerant guarantees to be broken. The only way to guarantee that 10,000-node set is participating in consensus is to increase the capacity of that network for cryptographic operations.
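The 3,334 figure here is the standard Byzantine fault tolerance bound: a network of n equally weighted nodes tolerates f = floor((n - 1) / 3) Byzantine nodes, so one more than that is the smallest set that can break the guarantees. A quick sketch of the arithmetic, assuming equal stake per machine:

```python
def bft_break_threshold(n: int) -> int:
    """Smallest number of compromised nodes that can break BFT guarantees
    in a network of n equally staked nodes (classic bound: n >= 3f + 1)."""
    f = (n - 1) // 3   # maximum faults the network tolerates
    return f + 1       # one more than tolerated breaks safety/liveness

print(bft_break_threshold(10_000))  # 3334, matching the figure in the debate
```

With unequal stake distributions the relevant quantity is a third of the stake rather than a third of the machines, which is why Anatoly stresses equal stake distribution later in the debate.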
[00:06:57] Richard: [00:06:57] Okay. I actually don't think Evan necessarily disagrees with what you're saying there, but the crux of the issue today, the difference of opinion today, is whether blockchains that are designed today are able to do what's being described. Evan, are you able to speak to why you believe that today's blockchains aren't able to do what Anatoly said?
[00:07:21] Evan: [00:07:21] Yes, and I think this is interesting because it really gets into the design decisions that blockchains have been making and are making now. I would actually disagree slightly with the claim that we have to increase throughput immensely in order to perform consensus. I think that depends on the consensus algorithm that is running under the hood. If it's a traditional BFT algorithm that requires the two-thirds-honest property, then yes, you need everyone to communicate all the time, and it requires you to increase your throughput a lot if you're going to increase the number of nodes in the network. But there are other options out there as well. For example, in proof of work, from a consensus standpoint there's nothing throughput-wise stopping you from being able to run with a huge number of nodes, because every slot is only one block. You're pretty good there. The problem starts as you increase the number of transactions beyond your consensus transactions. Because if you're going to use the blockchain, and the blockchain has been around for a year, you have to validate all the transactions that have been made over that year. Which means that depending on how many transactions have been made, it's going to be more or less work for you as the user.
[00:08:29] Richard: [00:08:29] Okay. But what you're describing as a drawback there isn't a TPS problem though. I think what you're talking about is a syncing problem, if you will. If someone gets on the network quite late, or has been disconnected from the network and comes back, then they need to basically download and then validate these transactions one by one, and it takes a really long time. I believe for Bitcoin it's on the order of five or six hours on a certain set of hardware requirements. I get that this is a problem that Mina Protocol is looking to address, but that's not the TPS problem we're talking about today. Is it?
[00:09:13] Evan: [00:09:13] We think it is. The thing I keep coming back to is syncing, and that's because the trustless access comes from when you sync. With today's blockchains, as you increase the throughput, it becomes harder to sync and therefore harder to get this trustless access, so they're connected concepts.
[00:09:29] Anatoly: [00:09:30] And this is where I think Evan and I disagree, though I think their approach is totally interesting in building a chain that solves that problem. The reason I disagree with him is that I don't think trustless syncing solves The Generals' Problem.
[00:09:48] Proof of work solves it, but not in a very efficient way. If you look at Ethereum and Bitcoin today, mining is concentrated around a few mining pools, and there are about three nodes in Ethereum whose signing keys for those blocks you would need to control to be able to prevent messages from passing through the network. If I have my armies surrounding Constantinople and I use Ethereum as my message bus, Constantinople needs to bribe three miners to prevent my messages from getting across. That's a very simple way to think about it: treat these networks as an implementation of a censorship-resistant message layer, right? Just a simple message bus, and the number of clients that you can synchronize and guarantee that Byzantine fault tolerant communication for on top of these networks, that's the limiting factor here. The attack vector is simply bribing those three nodes. It is somewhat economically infeasible to do a very large rollback, but it's not economically infeasible to bribe the Ethereum miners to prefer certain USDT transfers between Binance and Coinbase over others. Because nobody will notice that one particular hedge fund gets their transfers across before everyone else.
[00:11:17] Evan: [00:11:17] I agree with this, actually. There are a couple of measures of decentralization, but for the actual core Byzantine Generals part of this, there's a lot to be gained for existing cryptocurrencies, and there are fronts besides TPS where you have to make progress on decentralization.
[00:11:35] Richard: [00:11:35] Let's go back a little bit. The reason why we had this debate in the first place was a Mina article that touched upon this concept of TPS versus decentralization. Evan, are you able to elaborate for us how that number is calculated?
[00:11:56] Evan: [00:11:56] Totally. We were spending time thinking through the difficulty of syncing on different networks. And we realized, why don't we just go out and measure how many full nodes there are in a network, which is for us a proxy of how hard it is to run a full node, against the throughput that the network supports. If you plot this, you get a pretty straightforward relationship between the two: as throughput goes up, the number of full nodes you observe on a network goes down. If you look at something like Bitcoin and Ethereum, the two chains that have been keeping their gas limit, their block size, pretty low, they have a pretty high number of full nodes. It's been either flat or declining, which we would argue is because the chain is still getting longer, but they have around 10,000 full nodes. But then when you start looking at chains that support higher throughputs, you start seeing counts as low as the tens, for chains like EOS or Tron, up to the hundreds or mid-hundreds for chains with middling throughput. This led us to developing this concept of decentralization versus scale, which we think is pretty observable in the wild if you just look at blockchains, and which we're hoping to break out of by removing this connection between having to grow this ever-growing blockchain and increasing the throughput of your chain.
[00:13:10] Anatoly: [00:13:10] Interestingly enough, for us, our validator numbers have been doubling. On testnet, it's near 600, like 570, and on mainnet it's 370. That's because nodes, when they join the network, don't sync the whole thing. They actually just look at whatever last snapshot has been finalized. If you look at any proof of stake network, or a lot of the proof of work networks outside of Bitcoin and Ethereum, they make these weak subjectivity assumptions about the last block that can be rolled back by any client. If you make those assumptions, there's actually no point for consensus to store that history. It's not going to change the consensus mechanisms of any of the nodes. They'll in fact continue running regardless of whether they see those blocks that have rolled back; they'll simply reject them. You don't actually need to sync anything else besides that last weakly subjective checkpoint.
[00:14:10] If you are building a proof of stake network for the majority of the world, this is what everyone has designed. ETH 2, every EOS version, every Tron version, every Cosmos fork has some notion of "we'll never roll back past a certain point".
[00:14:28] So that kind of syncing doesn't really impact those kinds of weakly subjective networks. And I think what folks haven't really talked about, and what is potentially an insight that isn't obvious, is that if you look at these networks, the non-proof-of-work ones, what you have is a VeriSign-style certificate chain: a very large one with a lot of signers, and a very long one.
[00:14:55] Just like with VeriSign, when you encounter a certificate, you verify the root through some out-of-band means. You go in and actually look around and make this weakly subjective kind of computation: "Hey, what is the actual network that I care about?", "What is that instance?", "How do I connect to it?" Once you establish that chain, the protocol ensures that it won't be broken. That is, in fact, the only thing that a proof of stake network needs to do. I boot up my node, I discover the network, I connect to it, the protocol or the software guarantees that assumption won't be broken, and then out of band, I verify: hey, am I on the network that I, as a human, actually care about? If that's true, then I can get my armies and communicate over this network and attack Constantinople.
[00:15:44] Richard: [00:15:44] Anatoly, in terms of a new full node that gets connected to the network: your argument seems to be that the full node doesn't need to download the entire history; it just needs to go back to a certain point. What reference does that full node have for that? How does it know where to get the latest copy, and to what point does it need to sync?
[00:16:07] Anatoly: [00:16:07] That delta of how much history is lost is really defined by the network protocol. There's some threshold, which is the maximum slashing distance, right? For ETH 2 I think it's 90 days. For consensus, it doesn't really matter if you have history that's older than 90 days, because there's nothing you can do with it to impact consensus. It becomes useless data. With proof of work, right, if you have an alternative fork that's 90 days old, you can start doing work on it, and if you end up building a heavier chain, it'll actually win. That's the objective thing about true proof of work: if you build a heavier fork, no matter how old it is, the entire network will switch over to it.
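The cutoff Anatoly describes can be sketched as a simple fork-choice guard: reject any competing fork that branches off further back than the protocol's maximum rollback distance, no matter how heavy it is. The slot duration and window below are illustrative assumptions for the sketch, not ETH 2's or Solana's actual parameters:

```python
# Toy weak-subjectivity rule. Assumed parameters: 12-second slots and a
# 90-day rollback window (roughly the figure quoted in the debate).
SLOT_SECONDS = 12
MAX_ROLLBACK_SLOTS = 90 * 24 * 60 * 60 // SLOT_SECONDS  # ~90 days of slots

def accept_fork(current_head_slot: int, fork_branch_slot: int) -> bool:
    """Accept a competing fork only if it branches within the rollback window.
    A heavier fork branching earlier than this is simply rejected."""
    return current_head_slot - fork_branch_slot <= MAX_ROLLBACK_SLOTS

# A fork branching ~100 days back is rejected regardless of its weight:
hundred_days_ago = 1_000_000 - 100 * 24 * 60 * 60 // SLOT_SECONDS
print(accept_fork(1_000_000, hundred_days_ago))  # False
```

Under pure Nakamoto consensus this function would always return True for the heavier fork, which is exactly the contrast being drawn in this exchange.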
[00:16:48] With proof of stake, you have a cutoff. Fundamentally, I think, there are almost two different ways to do censorship resistance. Ethereum Classic got a 4,000-block rollback. That is a demonstration of proof of work censorship resistance, because it guarantees that eventually the heaviest fork will win. But that is different from guaranteeing that when I send a message right now, I have some guarantees about it reaching my armies within a short amount of time, or that my hedge fund gets a chance to transfer funds between USDT and Binance under some fair assumptions. So to me, it seems like there's a Pareto-efficient space of trade-offs here. Real classic proof of work is one point, and in our approach, we're really maximizing this set of nodes that can participate in consensus, and therefore maximizing the minimum set that has control over the liveness threshold. Effectively, that is the set that decides whether any block is going to be accepted or rejected by the network. If you maximize that set, you effectively create a more robust censorship-resistant message bus, and to me that is the true meaning of decentralization.
[00:18:07] Evan: [00:18:07] I actually want to take this back to syncing and respond to Anatoly's previous comments. Maybe we can use this mental model, if it is helpful. If you are someone who's watching this battle happening and you want to actually observe what's happening on the network, and you want to join in and see what the state of the world is, then you need some way to synchronize to this thing and actually know you're talking to the right set of Generals that have been around since time started, and that they're actually doing what they're supposed to be doing. I think what's really exciting about proof of stake, and I think what Anatoly was getting into, is that there is a really wide design space that we can choose from. We can go as far as throwing out all history if we want to, so we don't have to think about that. But I think the right question is: what set of assumptions is this new person having to make if they're going to join the network?
[00:19:00] I think this is really key, because this is what allows you to decide whether or not you're going to have to care about this throughput problem. Because if you want, you can throw away the history and say, "I'm going to choose a trust model in which I don't have to look at the whole history," and that's a very different trust assumption. But if you want, you can also say, "I will make no such assumption about trust in the current state of the network; I want to actually see a proof that the network's been operating correctly since the beginning of time." We really want to maximize decentralization: rather than our users having to figure out whether they're on the right recent part of the chain or not, they should be able to just join from Genesis. I think this is key because even if you are sending these cryptographic messages around and have a really efficient protocol for sending them, what's important is that you're sending them to the real set of Generals.
[00:19:50] Anatoly: [00:19:50] I think that if you are building a proof of stake network, it is impossible to guarantee who the real set of Generals are, because you can always lose old keys, and somebody can create an alternative fork that goes back past whatever rollback horizon you started with.
[00:20:07] Evan: [00:20:07] This is the long range attack, yes. I think the assumption you have to make here, if you want to be secure, is that at no point in time, and let me know if this is what you are referring to, has any party had more than 51% of the stake. Because if they did, they can go back in time and create this really long fork. Is that the case?
[00:20:32] Anatoly: [00:20:32] Those keys could've been stolen. Those keys could have been leaked. Somebody could simply offer a market for old staking keys; once they're rotated, they lose value, so I can simply buy those private keys. Eventually an attacker could accumulate enough old keys from previous history and create alternative forks, and the only way to deal with that is with some weakly subjective assumptions. My personal belief is that once you take those assumptions one inch, you're not losing any security by using those assumptions strictly, because you have to validate those assumptions anyway.
[00:21:12] So as soon as I connect to the network, for me to actually know whether I'm connected to the real network, any proof of stake network, I have to validate that the signers in this network are what I expect. So how do I do that? It depends on the application and what you care about from the network itself. If all you care about is value transfer, then you can literally go to Binance and make sure that you can transfer from Binance to your account and back, and to all the exchanges that you care about. Then it doesn't matter if it's a long range attack, because the financial institutions that you're connected to, that your Generals are connected to, are all on the same long-range-attack network. Your nodes guarantee that connection remains secure. To me, there's no way around this, except for Nakamoto proof of work.
[00:22:04] Evan: [00:22:04] There's actually a really interesting metaphor here with proof of work that I think is somewhat relevant and really fascinating. If you look at the current hash rate on Bitcoin right now, it's very high, but if you look at all the hardware that has been created since the beginning of Bitcoin, it's much higher. So if someone were to go out there and get a truck, or probably an army of trucks, and collect all these old machines, they could perform a 51% attack pretty feasibly. This is the hardware version of the stolen-keys attack; I think the metaphor holds. What I think is cool with proof of stake is that there are schemes by which honest nodes can effectively destroy their old keys and replace them with new ones on a periodic basis. This would be as if miners, whenever they upgraded their ASICs, had to smash all their old ASICs, which is probably not going to happen, but because it's software, it's easy to program in by default.
[00:22:58] I do think a big question here is whether, once such a system exists, people will actually go in and start disabling it so they can save their old keys, so that eventually, if they want to defect, they can. But I think we have a little more wiggle room, a little more possibility of safety, under this paradigm, where we can make it pretty hard for an attacker to get the stake necessary.
[00:23:19] Anatoly: [00:23:21] I think that while an attack on proof of work can destroy the value of that coin, it doesn't actually destroy the network, right? The Ethereum Classic 4,000-block rollback added security to the network. So even if somebody takes all the hardware and 51% attacks Bitcoin, under the assumptions of proof of work that is still the actual network, and what it did is overcome censorship. That may be useless for an application or for humans, but that is the fundamental thing about proof of work and that kind of Nakamoto-style consensus: the older my coins are, the more security the network accumulates, and that attack possibility approaches zero. You can't replicate that with proof of stake. You can make the safety improvements that you talked about, which automatically destroy keys and make it really hard for an attacker to go buy them up, but that's not something that you can guarantee forever.
[00:24:23] And I personally think that the fact that those attacks can happen in proof of work makes the chain more secure, but pretty useless for humans. So it's broken from that perspective.
[00:24:34] Evan: [00:24:34] Yeah, I agree.
[00:24:36] Richard: [00:24:36] One of the updates from the last debate was this idea of whether full nodes are as important as consensus nodes, and I remember when Anatoly was debating Emre last time, there was a long discussion of whether full nodes were relevant in resolving disputes in the ledger. Any updated thoughts on this, Anatoly?
[00:24:58] Anatoly: [00:24:59] I think the way we've been framing decentralization is in maximizing this set of nodes that can halt the network. I don't think full nodes are important, because effectively this is the hack that we're doing, right? Nakamoto chose the hack where the heaviest fork wins, which breaks a lot of applications when it happens, but increases the security of the network.
[00:25:22] The hack that we're taking is: you maximize the set, which means that it becomes really hard to halt the network or censor it. The other nodes that are participating, that are downloading the ledger, are basically RPC nodes. They're there to scale reads, stuff like that, but they don't count for consensus. They don't count for censorship resistance. To me, when people make these charts of who has more nodes, those nodes are as useful as GitHub stars. They are a sign of adoption, because people go and run them, but they're not actually there to help those Generals attack, so they don't actually count.
[00:26:06] Evan: [00:26:06] My take is that the number of nodes that exist is more of an indication of something rather than a huge win in and of itself. The thing that I think is important here is, we have this notion of: what is the halting set of the network? How big can we make that? I think that's one notion of decentralization, but there's another side of the coin, which is: how hard is it as a user to get access to this system? How hard is it for me to either join this set of consensus nodes or just perform a transaction without being beholden to some exchange or some other entity?
[00:26:37] The number of nodes isn't necessarily an indicator of how easy it is, but it does go some way towards showing that. I think that's what really matters. As a user, if I want to access some decentralized app, is that going to be something that requires a desktop and a lot of time, or something that I can do on my phone?
[00:26:54] Richard: [00:26:54] Okay, interesting. It sounds like the position on full nodes has really evolved for you guys. Anatoly, just to follow up on something else you mentioned last time with regard to centralized exchanges dictating the way the ledger looks: since our last discussion, there's been lots of FUD around exchanges. OKEx, KuCoin, BitMEX, and Binance are getting regulatory heat too. So is the movement toward a self-custody world being accelerated, and if so, are full nodes getting more important more quickly as well?
[00:27:31] Anatoly: [00:27:32] I would like that to be true, but you saw Ethereum go down, and a bunch of Ethereum applications and the exchanges connected to Ethereum basically stopped withdrawals. So I think humans are just remarkably lazy. This is where I will concede to Mina, and I think what they're building is really cool. If you can make the Schelling point of running a full node so low-cost that you'd rather do that than use Infura, then you end up with networks that have social resilience: even if the set of validators is small, and you may not have the same liveness properties as, say, Solana, where this is the only thing we're focused on, if that set is corrupted, everybody immediately knows, right? Everybody immediately knows that something went wrong.
[00:28:21] There's something interesting there when you maximize the watchers. We're trying to maximize the watchmen, and I think there's a different approach where you have a small set of consensus nodes and a very large set of observers, and that has some interesting properties in itself.
[00:28:38] I don't know what the trends are, but I think the reality is that humans are pretty lazy and they will just do whatever works. If you can make whatever works have high security guarantees for the most part, that may allow you to scale that number to high security margins in a cheaper way.
[00:28:56] Richard: [00:28:59] Okay. This question is for Anatoly again. In our last debate, you said you were okay with light clients and SPV clients being the norm for how users access the network. Two follow-up questions there. First, what is the minimum requirement for running a full node and a consensus node in Solana? Give us a dollar price. Second, do you have any theory on the acceptable threshold of the number of light or SPV clients versus the number of full nodes for a network to be considered secure?
[00:29:26] Anatoly: [00:29:26] So again, from our design perspective, what we're focusing on is maximizing that set of participants that can halt the network. It doesn't matter how many clients there are or aren't, because that doesn't impact that key core feature, and our goal there is to get it to a level where there are some really good certainties about that level of censorship resistance. I think that means getting to 10,000 validators with equal stake distribution, because it's not just full nodes, it's how the stake is distributed, such that the minimum set of nodes that can halt the network is in the thousands.
[00:30:01] If you get to that point, you can almost treat it like the metaverse, this universal computer that's always there, and you can plug into it whenever you want to. That's the goal, right? That will scale to clients that can make really small assumptions about the network. They just need a cryptographic key, and they can validate whoever's connected to it really quickly, with old-school signature verification. You don't actually need to validate the headers. You just need to validate, "Hey, am I connected to the same network as the people that I care about?", and that can be done really quickly.
[00:30:39] If people want to understand the difference and how we're visualizing this: I think the purpose of this network is to be this message bus for Generals. The crypto-economics and all this other stuff is almost an implementation detail. But if you can solve the Byzantine Generals Problem in a way that scales to any large number of users that want to use this in some way, maybe to build an ERC-20 token or a community or whatever, we don't really care, but we need to solve this core problem. So this is how we're thinking about it.
[00:31:14] And for us, the capital cost to run a validator is the hardware. You can basically think about it in terms of load and capacity. You need about two or three cores per thousand TPS of load. If the current load is about a thousand TPS, you need at least a four- to eight-core system with 32 gigs of RAM.
[00:31:40] But what we want is high capacity, right? Because high capacity is what sets the price per transaction, and the more hardware the better, and that always gets better every two years: there's twice as much horsepower per dollar. So you can either get last year's hardware cheaper, or you can get this year's hardware at the same price and increase the capacity of the network, and therefore reduce the price per transaction and increase the number of validators that can send those transactions for consensus. This is where Moore's law plays in our favor: every two years, without changing anything in the protocol, in the fundamentals of how anything works, the network can basically support twice as many nodes connected to it, and that's an exponential function that is really fast when all is said and done.
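Anatoly's doubling argument can be put in concrete terms with a quick sketch; the base throughput figure below is purely illustrative, not a quoted Solana number:

```python
# Illustrative sketch of the Moore's-law scaling argument: if hardware
# throughput per dollar doubles every two years, network capacity grows
# exponentially with no protocol changes. base_tps is a made-up figure.

def capacity_after(years: float, base_tps: float = 50_000,
                   doubling_period: float = 2) -> float:
    """Capacity after `years`, doubling every `doubling_period` years."""
    return base_tps * 2 ** (years / doubling_period)

print(capacity_after(2))  # one doubling:  100000.0
print(capacity_after(4))  # two doublings: 200000.0
```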
Richard: [00:32:35] Okay. If we were to just do a cross-comparison of the cost of running a consensus node and a full node for the two networks, what kind of quantifiable measures, such as a dollar price, can you give us?
[00:32:50] Anatoly: [00:32:50] The main cost, because Solana requires full replication (you have to observe everything that the network is doing), is bandwidth, or egress, and egress costs vary depending on where you live, but typical co-location pricing is 50 bucks a terabyte. So if you take 50 bucks a terabyte and 10,000 validators, and you split that up, the cost per 128-byte message is about 10^-5 dollars. So, 50 bucks a terabyte, right? Data is replicated by 10,000 validators, so every one of these validators has to pay 50 bucks a terabyte of egress. So when I send my 128-byte message, it's replicated 10,000 times. I have to pay for that, otherwise everybody's running at a loss, and the minimum fee for the user at that scale of decentralization ends up at something like 10^-5 dollars per 128-byte message. 128 bytes is enough to stuff a signature in there, a public key, and then some user data.
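Anatoly's back-of-the-envelope fee math can be checked in a few lines. The $50/TB, 10,000-validator, and 128-byte figures are taken from the conversation; the decimal terabyte (10^12 bytes) is an assumption for illustration:

```python
# Estimate of the minimum per-message fee Anatoly describes. Inputs are
# the figures quoted in the conversation; the decimal-terabyte
# convention is an assumption.

COST_PER_TB = 50.0    # dollars per terabyte of egress at co-location
TB = 10**12           # bytes per terabyte (decimal convention)
VALIDATORS = 10_000   # replication factor: every validator re-sends it
MSG_BYTES = 128       # signature + public key + some user data

total_egress = MSG_BYTES * VALIDATORS   # bytes sent network-wide
fee = total_egress / TB * COST_PER_TB   # dollars per message

print(f"{fee:.1e} dollars per message")  # 6.4e-05, on the order of 10^-5
```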
[00:33:53]Richard: [00:33:53] Okay. That's helpful. How about you, Evan?
[00:33:56] Evan: [00:33:56] Yeah, it's pretty cheap. If we look right now at the base cost of running a node, there'll be similar transaction-throughput bandwidth costs on top of that, but right now you basically have to get access to a four-core machine, and that's about it. I guess it's effectively free if you have a computer right now, and hopefully we will get it down to working on phones and browsers at some point, and I think it'll be really cheap to run a full node. For consensus, if we end up supporting higher bandwidth at some point, then we'll have to pay for bandwidth also, but that's pretty cheap as well.
[00:34:29] Anatoly: [00:34:29] So I think one thing that people often overlook when they think about cheap or expensive validators is that consensus nodes are highly available systems. This is where, I think, a lot of detractors of Solana say, "Oh, the network is too expensive to run," but the reality is that it doesn't matter if ETH2 can run on a Raspberry Pi, it doesn't matter if they're sharding, it doesn't matter if they rotate the set. The nodes that participate in consensus have to be highly available systems, basically running at three nines. Which means you can't use your phone, and you can't run it on a laptop that you blink in and out of at a cafe. There's no design right now that anyone's built where consensus can run on low-availability hardware, outside of proof of work, where you wake up your machine, it does some hashing, gets lucky, and produces a block. So effectively, if you're not doing that, if you're not doing Nakamoto consensus, you're building a high-availability network that runs on high-availability nodes. Therefore, basically, they're all equivalent in cost. You're paying for egress, and then: how effectively can you use that egress? How effectively can you parallelize and scale and transmit? That's the difference between a good video-streaming service and a bad one, or a good CDN and a bad one. It's just implementation details. The blood and sweat of engineering.
[00:35:56] Richard: [00:35:56] So my understanding is that the main value proposition for Mina is that the cost of operating a full node and the cost of a consensus node are supposed to be lower, and that's why it's easier to scale and basically have a higher degree of decentralization. But based on what you guys have just been saying, it doesn't sound like that cost advantage for Mina is that evident. Is that the case?
[00:36:23] Evan: [00:36:23] I would say it's pretty substantial, actually. Running a full node is extremely cheap, both in terms of bandwidth and in terms of compute. Running one on your phone is not going to happen on really any existing network, because they don't have the zero-knowledge proof, which proves the whole history. You're going to have to do something.
[00:36:42] Anatoly: [00:36:42] That's not a consensus node, right? That's just a node that's downloading the headers, like the ledger history.
[00:36:47] Evan: [00:36:47] Yeah, it's a little more than the headers, since it's the whole history and a proof down to particular accounts. But yeah, so the full node is a lot cheaper; for consensus it's a little bit closer. For consensus it's probably just going to be pretty similar, depending on bandwidth. What I think is cool here is that these are just two halves of this problem. For consensus nodes, I think we do gain improved guarantees on the set of assumptions you have to make when joining the network, because you get this proof as a consensus node, but the actual cost of running one is basically also going to be bandwidth, because you do have to be connected and monitoring all the transactions. The added gain with Mina is access to these cheap full nodes. Whereas with consensus nodes, you've got to stream all the transactions. That's where it's at.
[00:37:27] Richard: [00:37:27] So it sounds like, at least for the consensus nodes, the cost advantage is not very obvious. It's just when it comes to the full nodes that the difference is there.
[00:37:37] Evan: [00:37:37] Yeah, and we've chosen to do pretty low throughput at first, because we just want to optimize really hard on the decentralization features. Our cost of running is really low right now, but that's only because the bandwidth cost is really low right now. If bandwidth were to be as high as something like Solana's, then for consensus nodes you'd have to stream all that bandwidth, so you'd have to deal with that, but full nodes would still be cheap to run.
Anatoly: [00:37:57] For us, the bandwidth comes from increasing the number of nodes participating in consensus and reducing their block times, so they're producing more signatures per second. Both of those things, increasing the number of nodes (which means more signers) and reducing the block times (which increases the number of messages), raise egress costs, but those are all things that benefit users: more nodes in consensus means a larger set for that liveness threshold, and lower block times mean faster confirmation times.
Richard: [00:38:32] Okay. Coming back to the main theme of decentralization and TPS here, what are some areas in each other's platform that can serve as inspiration for your own?
[00:38:48] Anatoly: [00:38:49] I can start. I think the zero-knowledge approach to compressing the history is really cool. Where I think that has tremendous potential, tremendous impact outside of just Mina, is the ability to build light clients that can verify that a remote chain followed some set of logic rules, so the source chain can basically guarantee that whatever proofs you're getting actually followed the expected logic the source chain depends on. That's really hard to do with other kinds of light clients. This fundamentally goes into the assumption about how we identify which network is real. As humans, we can take a side-channel approach: I boot up my node, it's connected, and then I go check, "Hey, am I connected to the same node that I care about?" I ask Evan for his public key and we send a message between each other; I ask Christian for his key and we all send messages between each other and we're like, "Oh, okay. We're all on the same network." But blockchains can't do that. A light client running in Ethereum or Solana can't actually go and validate this. So those assumptions have to be baked in, and that is where I think there's a ton of financial risk, and incentives for the remote chain to collude and break them. I think if you have, effectively, zero-knowledge rollups running inside all these remote chains, then you can quickly validate that the logic is correct and that the signers signed it. Then, if they have to be slashed, it's very easy and cheap to do. To me, this is where this stuff can really create the internet of blockchains that Cosmos is working on. I think it really depends on the improvements that Mina's working on.
[00:40:46] Richard: [00:40:46] Yeah, and to be frank, it really doesn't sound like what Mina's working on is incompatible with the way you guys are doing things at Solana, right? So theoretically that technology can basically be employed by your full nodes to verify history in an extremely efficient manner.
[00:41:04] Anatoly: [00:41:04] I think because zero-knowledge programming, the bytecodes and the virtual machines, is so different from standard interpreted programs, it'd be pretty hard to build those circuits. Evan knows more about this, and maybe it's possible in the long term, but we have, effectively, x86. Taking arbitrary programs that are running on x86 and compiling them into a zero-knowledge circuit is tough. It's tough to do that at scale, or in a way that doesn't require a ton of work. I don't know if it's possible in the general sense. But what's cool is that if we had Coda as a virtual machine shared between Solana and Ethereum, that would be a VM we could use as a value-transfer and interface layer between these two networks, with guarantees that the execution on both sides is doing what we expect, or else the signers validating that execution need to be slashed. That signature is easy to prove and show, and therefore an insurance model built on top of this is simple for humans to validate and run. Reducing the cost of running that validation and the relayers, and just reducing the complexity there, would, I think, really make those systems a lot more robust.
[00:42:25]Richard: [00:42:25] Okay, great. Evan, were you going to also talk about inspiration from the other side?
[00:42:31] Evan: [00:42:31] Totally. I can talk about the general compute stuff too, but let me talk about this first. I think what Solana has done with both speed and finality is really impressive, and like you were saying, these things are not mutually exclusive.
[00:42:43] I think there's a lot of theoretical and practical engineering work that had to go into making such a fast chain, and there are definitely things to learn there. We've all adopted this "we're just going to throw a linear set of transactions into a block" approach, like Bitcoin did, and there's so much more flexibility and possibility in how you can set that up that really yields major improvements. So I think that's the big learning for me.
[00:43:06] Richard: [00:43:07] Those are all the questions from our side. So any last word from either of you?
[00:43:11] Anatoly: [00:43:11] I'm excited about all the zero-knowledge work people are doing, especially the Mina folks, because I love it when people take these crazy risks, doing something that seems impossible, or at least seemed totally impossible to me a year ago, and now it seems like it's coming together. It's just really cool to watch; it's just awesome. I wish you guys the best of luck.
[00:43:32] Evan: [00:43:32] Thanks. You too. I think for me, what I love about this is we're all talking about decentralization, but decentralization has been this word that's had all these concepts stuffed into it. When you look at both our protocols, you can see how you can start breaking that apart and looking at the little pieces of what really goes into decentralization. I think it's true that you have to increase the throughput if you want people to actually use this stuff; that's part of decentralization. And you need people to be able to get access to these things if you want them to be able to use them in a trustless manner. I love breaking apart the word and really understanding what goes into it.
[00:44:04] Richard: [00:44:04] Okay, great. Thanks for joining the debate today, Anatoly and Evan. How can our listeners find both of you, starting with Evan?
[00:44:11] Evan: [00:44:11] Yeah, you can find us on Twitter. Just go to @MinaProtocol. If you want to contact me directly, my Twitter is @evanashapiro. So you can contact us there.
[00:44:22] Anatoly: [00:44:22] Go to Solana.com and join our Discord and you can talk to myself and all the engineers working on Solana and a ton of our members of our community.
[00:44:33] Richard: [00:44:33] Thanks a lot to both of you. I learned a lot. Listeners, we'd love to hear from you and to have you join the debate via Twitter. Definitely vote in the post-debate poll, and feel free to join the conversation with your comments on Twitter. We look forward to seeing you in future episodes of The Blockchain Debate Podcast. Consensus optional, proof of thought required. Anatoly, we look forward to having you come back a fourth or fifth time.
[00:44:53] Anatoly: [00:44:53] Anytime. This is a ton of fun.
[00:44:56] Richard: [00:44:56] Yeah. Perfect. Thank you guys.
[00:44:58]Evan: [00:44:58] Thanks.
[00:44:59] Awesome. Thank you.
Richard: [00:45:02] Thanks again to Evan and Anatoly for coming on the show. The debate evolved into a bit of a juxtaposition of the approaches of Mina and Solana. Both chains claim to be able to advance TPS and decentralization simultaneously. As far as I can tell, Mina's chain, with its super-small footprint, will allow participation from a huge network of full nodes, while Solana is focusing on the throughput of consensus nodes, doing meticulous engineering optimizations and riding on the continual advancement of Moore's law.
[00:45:30] What was your takeaway from the debate? Don't forget to vote in our post-debate Twitter poll. It will be live for a few days after the release of this episode, and feel free to say hi or post feedback for our show on Twitter. If you liked the show, don't hesitate to give us five stars on iTunes or wherever you listen, and be sure to check out our other episodes, with a variety of debate topics: Bitcoin's store-of-value status, the legitimacy of smart contracts, DeFi, PoW versus PoS, and so on.
[00:45:57] Thanks for joining us on the debate today. I'm your host Richard Yan, and my Twitter is @gentso09, G-E-N-T-S-O-zero-nine. Our show's Twitter is @blockdebate. See you at our next debate.