The Blockchain Debate Podcast

Motion: Security is about maximizing the minimum set of colluding miners (Anatoly Yakovenko vs. Dankrad Feist)

Richard Yan, Anatoly Yakovenko, Dankrad Feist Episode 28

Guests:

Anatoly Yakovenko (twitter.com/aeyakovenko)
Dankrad Feist (twitter.com/dankrad)

Host:

Richard Yan (twitter.com/gentso09)


Today’s motion is “Security is about maximizing the minimum set of colluding miners.”

This is a mouthful. The minimum set of colluding miners is the smallest cartel of dishonest block producers you need to attack a network. Maximizing that set is about increasing the size of such a successful cartel, essentially making it harder for block producers to collude. Note this debate statement leaves out full nodes. And that’s the essence of this debate: Are they important in securing the network?

So, to get more context on this, take a look at a recent blogpost by Vitalik Buterin on limits to blockchain scalability. This article instigated the sparring between our guests on Twitter, and led to today’s debate. In his article, Vitalik argued that the ability for consensus nodes to collude and do bad things should be held in check by full nodes. And therefore, there’s a strong need for regular users to be able to run full nodes. 

Today’s debate is essentially an examination of the validity of that statement. Is security about maximizing the minimum set of colluding miners (aka increasing the smallest number of consensus nodes required to censor or collude), or should we also worry about making sure to onboard more full nodes?

The two debaters today are from Solana and ETH 2, respectively. When it comes to ensuring security of the network, they disagree on how important it is to make it easy to run full nodes.

The debate took a major detour. The two debaters were very passionate about their respective projects and went down the rabbit hole several times pointing out potential weaknesses they see in each other’s designs. I decided to keep all of that in, because one way or another, those discussions found their way back to the topic at hand.

If you’re into crypto and like to hear two sides of the story, be sure to also check out our previous episodes. We’ve featured some of the best known thinkers in the crypto space.

If you would like to debate or want to nominate someone, please DM me at @blockdebate on Twitter.

Please note that nothing in our podcast should be construed as financial advice.

Source of select items discussed in the debate (and supplemental material):



Guest bios:

Anatoly is founder and CEO of Solana, a layer-1 public blockchain built for scalability without sacrificing decentralization or security, and in particular, without sharding. He was previously a software engineer at Dropbox, Mesosphere and Qualcomm.

Dankrad Feist is a researcher at the Ethereum Foundation, working on ETH 2.0. He was previously an engineer for Palantir, and co-founded a healthcare startup named Cara Care.

Motion: Security is about maximizing the minimum set of colluding miners (Anatoly Yakovenko vs. Dankrad Feist)


Richard: [00:00:00] Welcome to another episode of the Blockchain Debate Podcast, where consensus is optional, but proof of thought is required. I'm your host Richard Yan. Today's motion is: "Security is about maximizing the minimum set of colluding miners." 

[00:00:21] This is a mouthful. The minimum set of colluding miners is the smallest cartel of dishonest block producers that you need to attack the network. Maximizing that set is about increasing the size of such a successful cartel, essentially making it harder for block producers to collude. Note this debate statement leaves out full nodes, and that's the essence of this debate. Are they important in securing the network? 

[00:00:46] So, to get more context on this, take a look at a recent blog post by Vitalik Buterin on limits to blockchain scalability. This article was the source of the back and forth between our guests on Twitter. And that led to today's debate. In his article, Vitalik argued that the ability for consensus nodes to collude and do bad things should be held in check by full nodes. And therefore there's a strong need for regular users to be able to run full nodes.

[00:01:16] Today's debate is essentially an examination of the validity of that statement. Is security about maximizing the minimum set of colluding miners, aka increasing the smallest number of consensus nodes required to censor or collude? Or should we also worry about making sure to onboard more full nodes? 

[00:01:35] The two debaters today are from Solana and ETH 2, respectively. When it comes to ensuring security of the network, they disagree on how important it is to make it easy to run full nodes. 

[00:01:46] The debate took a major detour in the middle. The two debaters were very passionate about their respective projects and went down the rabbit hole several times pointing out potential weaknesses they see in each other's designs. And I decided to keep all of that in because one way or another, those discussions found their way back to the topic at hand. 

[00:02:05] If you're into crypto and like to hear two sides of the story, be sure to also check out our previous episodes. We featured some of the best known thinkers in the crypto space.

[00:02:13] If you would like to debate or want to nominate someone, please DM me @blockdebate on Twitter. Please note that nothing in our podcast should be construed as financial advice. I hope you enjoy listening to this debate. Let's dive right in! 

[00:02:27] Welcome to the debate. Consensus optional, proof of thought required. I'm your host Richard Yan. Today's motion: Security is about maximizing the minimum set of colluding miners. Here are a few other ways to think about the debate: is the effort to maximize the Nakamoto coefficient alone sufficient in ensuring security of the network? Here, the Nakamoto coefficient is defined as the number of block producers you need to corrupt in order to attack the network. Yet another way to think about the debate is: do full nodes play a significant role in ensuring security of the network? Vitalik Buterin wrote an article about why it's important to get regular users to run full nodes. This will be linked in the show notes. This article is actually what got this debate started. 
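(To make the definition above concrete, here is a rough illustrative sketch of how a Nakamoto-coefficient-style number can be computed from a stake distribution. The numbers are made up for illustration and are not taken from any network discussed in this episode; the 33% threshold matches the share the debaters use for halting or attacking a BFT network.)

# Illustrative sketch only: hypothetical stakes, not real network data.
def nakamoto_coefficient(stakes, threshold=0.33):
    """Smallest number of validators whose combined stake reaches `threshold`."""
    total = sum(stakes)
    running = 0.0
    for count, stake in enumerate(sorted(stakes, reverse=True), start=1):
        running += stake
        if running / total >= threshold:
            return count
    return len(stakes)

# Hypothetical distribution: three large stakers plus many small ones.
stakes = [14, 11, 9] + [1] * 66           # shares of total stake
print(nakamoto_coefficient(stakes))       # -> 3: the top three already control ~34%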

[00:03:06] To my metaphorical left is Anatoly Yakovenko arguing for the motion. He agrees that security is about maximizing the minimum set of colluding miners. To my metaphorical right is Dankrad Feist arguing against the motion. He disagrees that security is about maximizing the minimum set of colluding miners.

[00:03:23] Let's quickly go through the bios of our guests. Anatoly is founder and CEO of Solana, a Layer-1 public blockchain built for scalability without sacrificing decentralization or security, and in particular, without sharding. He was previously a software engineer at Dropbox, Mesosphere, and Qualcomm. Dankrad Feist is a researcher at the Ethereum Foundation working on ETH 2.0. He was previously an engineer for Palantir and co-founded a healthcare startup named Cara Care. Welcome to the show, Anatoly and Dankrad.

[00:03:49] Anatoly: [00:03:49] Awesome to be here.

[00:03:50] Dankrad: [00:03:50] Yeah, thanks for having us. 

[00:03:52] Richard: [00:03:52] Great. We normally have three rounds: opening statements, host questions, and audience questions. Currently our Twitter poll shows that 60% agree with the motion and 30% disagree with the motion. After the release of this recording, we'll also have a post-debate poll. Between the two polls, the debater with a bigger change in percentage of votes in their favor wins the debate. 

[00:04:12] Okay, let's get started. So Anatoly you are in the pro position so please tell us why you think that security is about maximizing the minimum set of colluding miners.

[00:04:23] Anatoly: [00:04:23] I think the way I look at this problem, to give you a morbid analogy, is: what are BFT systems for? Imagine you are the USA and you're building a bunch of nuclear strike sensor arrays. They're for detecting that the opponent has launched an attack, and when you design the system, you're thinking about what the attacker is going to do, right? Are they going to try to corrupt that sensor? You want to make it as hard as possible for them to do so. So your goal is to maximize the work that they have to do to do that. So you build a bunch of redundant arrays that have their own command and control, they're all independent, independently operating, to minimize the chance of any of them colluding, right? Any of them taking bribes, any of them being broken into. To the level that it takes much more work to do so. That minimum set to get to 33% is what's going to keep you safe, or keep you safe enough to detect that first strike. And every additional independent node or validator that you add to that set makes it exponentially harder for that attack to pull off. And how do you gauge that security, or how do you gauge what is enough? I think that's still an open question, but I think at some gut level, when that number passes a thousand or 10,000, I think we as humans will stop thinking about it as something that's even remotely possible and just start taking it for granted that, hey, look, there's this nebulous, decentralized supercomputer that's incredibly secure and open and censorship resistant, and it's the metaverse, what you imagine out of, like, William Gibson's Neuromancer when they plug into the matrix.
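(To make the intuition in this framing concrete: under the independence assumption Anatoly is invoking, if each member of the minimum set that adds up to 33% can be corrupted independently with probability p, then corrupting all N of them at once succeeds with probability p^N, which shrinks exponentially in N; for example, p = 0.1 and N = 10 already gives one in ten billion. Whether validators really fail independently, rather than sharing incentives, is exactly what is contested later in the debate.)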

[00:06:20] Richard: [00:06:20] Okay. Great. Dankrad, go ahead with your opening statement, please, and feel free to counter the points Anatoly has made.

[00:06:26] Dankrad: [00:06:26] Yeah, sure. So basically I'm going to go a little bit back to what blockchains actually are. So I would describe them as: we have a network that consists of peer-to-peer nodes, full nodes, that can verify transactions. And the nice thing is that the guarantees about who owns which coins, for example, are cryptographic. And a cryptographic guarantee, just to give some numbers here, means, for example, that 80-bit security you can break with a few billion dollars, probably.

[00:07:02] But we're nowadays aiming for 128-bit security. This is a quadrillion times harder to break than 80-bit security. So that's a one with 15 zeros, times those few billion dollars you need to break 80-bit security. So this is where the security in blockchains comes from.
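(A back-of-the-envelope check of that ratio: the gap between 80-bit and 128-bit security is 2^(128-80) = 2^48, roughly 2.8 x 10^14, a few hundred trillion; the "one with 15 zeros" is that factor rounded up to a quadrillion. Multiplying the few billion dollars quoted for breaking 80-bit security by that factor is what puts 128-bit security far beyond any realistic attacker's budget.)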

[00:07:23] There's one problem that pure cryptography can't solve for you, and that is ordering the transactions. We need an extra, special role of people who add that functionality, and these are the miners or stakers. They provide crypto-economic guarantees that the transactions are ordered in one way and that this cannot be reverted. Now, if we change to a world where this consensus not only decides the order, but also what's valid, then we've reduced these extremely hard, extremely strict cryptographic assumptions to just crypto-economic assumptions. And these crypto-economic assumptions are on the order of, with proof of stake, hopefully billions; with proof of work, realistically, it's actually just a few million. And this loss of security is insane. That's not something I would want to build the world financial system on, as an example. Consensus nodes are still super important because they do determine the censorship resistance, but censorship resistance without the security guarantees is useless in my opinion. What is the point? You have this cool network that works very hard to make sure no one can stop you from sending transactions, but you have no idea who owns which coins, because they can do anything they like with that. I don't see that. That to me seems like building a skyscraper without foundations, like just on sand. 

[00:08:50]In addition, if you have this security, if you have this peer to peer network, then the nice thing is that most faults are recoverable. I'm not super worried. I mean, 51% attacks are really bad of course. I don't think that they should be irrecoverable. I think we will see them in our lifetimes on the most secure networks. We have seen them on smaller networks, but we'll also see them on the more secure ones and we will be able to recover from them because these miners and stakers, they are just one specialized role. They only have this role of ordering the transaction and if they don't do that properly, if they start censoring, then we're going to replace them.

[00:09:31] And the reason why we can do that is because the loss of censorship resistance, while very annoying, is usually tolerable for a short time. Like, for some hours, it's a loss of service. You will experience that. You can also lose your internet provider for some hours. So that is very rarely fatal. There are some exceptions in applications where it is really, really bad, but let's say for the majority of users, it's not the end of the world. 

[00:09:56] So to come to the conclusion of my opening statement: I actually don't claim that the number of full nodes is the important quantity. I think it's not the best to just count the number of full nodes and make claims about that. Of course, we could pay people to run tens of thousands of full nodes and they're going to do it, and it's great, we have all these full nodes. That's not going to add anything to the network. I don't claim that. What I say is that what is important is that the majority of the value is behind full nodes. So that means that users who interact with the network and have large amounts of value, they use full nodes to access the network. And if most people do that, then whatever you can do by controlling the consensus, by having a majority, is very, very little, because you can never fool this majority of users into accepting invalid transactions, or accepting your 10 trillion ETH that you have printed, or into, like, accepting that you stole someone else's coins or something like that. That's just not possible. And so suddenly the incentive to even try to control the consensus goes way down. 

[00:11:05] So I started coining a term for that, which I call "Consensus Extractable Value". This is slightly different from Miner Extractable Value, which is only the value that you can get by controlling a single block, which miners do at the moment. But what can you do if you actually control the majority of the consensus? And I think it's a big problem if, with that, you can suddenly change all the rules. And that is basically what Anatoly is claiming: that it's okay if the majority of the consensus can change the rules. And I think that's not okay, and we need to minimize this consensus extractable value. We need to make sure that most users will never follow any unilateral change of rules that the consensus nodes introduce. 

[00:11:48] Richard: [00:11:48] Okay. Thank you, Dankrad. So Anatoly, you can respond directly to Dankrad now, or I can ask you some questions.

[00:11:55] Anatoly: [00:11:55] Yeah. So, fundamentally, I think what's interesting about the approach of ETH-2 staking is this idea of post-strike, right? That we don't care if somebody nukes the network, as long as there are at least some survivors that can say, "Hey, it was nuked. Let's rebuild." Is that okay, Dankrad, if I make that analogy? Is that actually close enough? That's my understanding of the ethos there, or the goals there. 

[00:12:23] Dankrad: [00:12:23] I would say that is certainly one of the goals, but that is definitely not the only design goal.

[00:12:29] Anatoly: [00:12:29] Yeah, so I think that's a valuable design goal, but I think what's interesting is that if you maximize for the reduction of the possibility of that strike, not by moving consensus around, but by removing the possibility of that strike happening, by making the minimum set of nodes that add up to 33% so large, truly independent nodes, that the probability becomes lower and lower at an exponential rate, that also forces an increase in the likelihood of somebody surviving. Because to maximize the minimum set to get to 33%, you invariably have to maximize the total set of these observers, everyone else that is following the chain. And the key part here is that the Solana network is designed to propagate this information simultaneously to all the participants in the network as fast as possible, so that everybody is receiving this data at almost the same time. 

[00:13:32] You can't accomplish that with sharding. You can't accomplish that with subcommittees or any kind of reduction of network bandwidth, and this is why Solana requires a ton of bandwidth. This is why it requires ASNs and data centers and hardware. But what that guarantees is that as that set gets bigger, the probability of one survivor also goes up, the probability of anybody actually pulling off this attack goes down, and the information that this even occurred is propagated as fast as possible. And Solana as a network, the way I think about it, is that it's not designed for the currency goal of, like, store of value; it's designed as an information system. The cool thing that we want to accomplish is that if this is the world's price discovery engine, if all the financial markets are running on it, then it's really, really important. Because true censorship is when regular users have access to data that is 15 seconds behind, like on Yahoo Finance, and all the sophisticated users have access to data that is immediate and actionable and profitable. That's the world that we want to avoid. We want to get to a point where the regular user, sure, it takes some work, but anybody can go call up a data center, anybody can build this box, anybody can plug in and have the exact same level playing field as the best market makers, the best exchanges in the world.

[00:15:04] And this is accomplishable with that work, right? Like, I put in the work, I go through the trouble of doing that, and I go deploy the system. Sure, it costs a bit more money than running it on a laptop, but I am directly plugged into the NYSE of the world, right? The CME of the world. And every time somebody does that, that increases that long tail. Every time the stake is more distributed, that increases that fat head, that minimum set of colluding miners. And that thing approaches this, like, perfect information symmetry around the world, where there is no extractable value from this idea that somebody gets access to this data first and somebody gets access to it later. In my mind, we already see this playing out with Flashbots and node pools and how DeFi works on ETH 1. 

[00:15:57] Richard: [00:15:57] Okay. Dankrad, feel free to follow up. I have some questions in mind on my own too, but you should follow up if you have counters. 

[00:16:03] Dankrad: [00:16:03] Yes, please. So my first thing: I think there were two very different extremes in what Anatoly said there, and I want to respond to them in order. So the first one is about a World War 3, nuclear attack type of scenario. And let me be perfectly clear here. So basically, I don't think this framing where the biggest threat is random faults or something like that, I think that is not what is important here. I fully agree, if the only thing you are... 

[00:16:36] Anatoly: [00:16:36] I don't mean, look, I don't mean a literal war, it's an analogy, right? If you look at ETH-2 stake distribution, Binance, Kraken, and Coinbase make up 33%. Three colluding actors that have their own intents can cause this invalid state transition. That's what I call a nuclear strike. 

[00:16:55] Dankrad: [00:16:55] They cannot. That's exactly the point. Like they cannot cause this invalid state transition because an invalid state transition is simply not accepted by anyone else. They can try signing that, it just has zero meaning in our network. 

[00:17:07]Anatoly: [00:17:07] For anyone else. Correct. But not for all the other exchanges. They could bisect, right? They could basically cause a double spend at, let's say FTX. 

[00:17:18]Dankrad: [00:17:18] Not if FTX is running a full node, they would not cause this because nobody would follow it. 

[00:17:23] Anatoly: [00:17:23] In the sense of a double spend, they could still do it, because they partition the network: they present FTX one transaction and then some other exchange, let's say Huobi, another one. Both valid transactions, right? But that requires that information split. That's what I'm talking about with the nuclear strike: how many colluders do you need to do that? 

[00:17:45] Dankrad: [00:17:45] Okay, so we need to be very clear here and I think this is a very interesting point. So I just want to still finish my first point here. I think, basically, we have to make a very big distinction between random faults and non-random faults. If someone tries to hack the machines or if they are hit by a meteor, then the pure number of machines is an interesting quantity.

[00:18:12] For that, 100 machines would be completely enough, right? The probability of them all being taken out is already so low if we distribute them across the world, that's fine. I'm not worried about that. And you're making the right point here, Anatoly, which I agree with. The interesting thing is when you have entities that have an incentive to collude. The bribery model is a relevant one for blockchains, because bribing happens all the time. We call it bribery and we think of it as, "Oh, if you get bribed, you're dishonest." Well, it's not so easy, right? There are so many ways of bribing people that people will not obviously see as dishonest. If you put your coins inside an exchange, and they pay you a 5% interest rate, you might be bribed. They must be using your coins to do something with them, like vote somewhere.

[00:19:04] So it's so easy to bribe people without it being obvious that we should move away from thinking of bribery as dishonest behavior and just accept that it is part of what we're building. We have to design our processes, our protocols, so that they resist this bribery. Now we're coming to what you said: a small number of entities, for sure. I'm not claiming that the stake distribution in ETH-2 is ideal. It's not horrible, but it's not ideal either. There could be some entities that collude to, say, do a double spend. If the chain is finalized, then realistically that is not going to happen, right? We have a strict rule that no client will ever revert a finalized checkpoint in ETH-2. So I don't see how they could do that. 

[00:19:51] Anatoly: [00:19:51] Yeah. That's a dumb example, right? But let's say Coinbase, Kraken and Binance wanted to take out OKEx and Huobi: they would double spend on both, convert that to Bitcoin, and leave those exchanges holding the bill, right? Not that that would ever happen, because there are other factors that would prevent them from doing so, but that's kind of the nuclear strike option. How hard is it to pull off? I think the more interesting part, for maximizing censorship resistance, is creating this equal field of information symmetry. That's not a use case that's as directly tied to security, but it's critical for finance. It's critical for open and fair markets. If you're optimizing for core security, the minimum set of nodes that add up to 33%, then those other aspects that ETH-2 is, I think, doing a pretty good job at solving, which is how do we get as many people watching some part of the state, also become easy, because the SLA on propagating that information about state transitions to every node, including the maybe one surviving node that isn't corrupted, to even say that there's been an invalid state transition, is much smaller, much tighter. It's better for that information to propagate in one to two seconds than in 10 minutes, because the stability of the finance that's running on top of it depends on that, right? As soon as that red alert goes out, people can pull their circuit breakers. The faster that happens, the less damage is done, right? In my mind, if you optimize for that core security group, maximize that set, make it as independent as possible, then this tail end of other things, the positive effects, also become easier to accomplish. 

[00:21:51] Dankrad: [00:21:51] I actually want to jump in here, because you already mentioned ETH-2 stake distribution. Are you happy with your distribution? 

[00:21:58]Anatoly: [00:21:58] No, it sucks. 

[00:22:01] Dankrad: [00:22:01] I look at Solana and 28% is AWS and 24% is [inaudible] now. I think that would really worry me if I had... 

[00:22:10] Anatoly: [00:22:10] ETH-2 is worse. So you're talking about ASN distribution, data center distribution; these are all aspects of the Nakamoto coefficient, and we're working on optimizing for those. I think we're the only ones that seem to be looking at ASNs. Who even publishes that, right? Unfortunately, the only data that people publish on most of these networks is who the top validators by stake are. We actually spend a lot of time drilling down: where are you located, what's your data center, are you on a separate ASN, which means a separate internet provider even if you're in the same data center, because all that stuff matters. 

[00:22:50] Dankrad: [00:22:50] Right. I would say we are looking at it, for sure, but I think the difference is also that we do not rely on this for security. We have a fallback: we can fork out the censors, which I don't see that Solana can do. 

[00:23:04] Anatoly: [00:23:04] What's the SLA for that flag to, you know, for that red alert to fire? Is it 10 minutes? 

[00:23:12] Dankrad: [00:23:12] Which red alert?

[00:23:13]Anatoly: [00:23:13] Some shard creates a corrupted state transition, they publish a header that creates a single...

[00:23:20] Dankrad: [00:23:20] So we aren't currently planning to do execution, I should be clear on that. Our plan is to only have data availability on shards. A shard itself cannot create Ether. It would be the roll-up construction on top of that.

[00:23:34] Anatoly: [00:23:34] So the exit of the roll-up requires a fraud proof check, right? 

[00:23:38] Dankrad: [00:23:38] That's correct. Yeah.

[00:23:39] Anatoly: [00:23:39] That still means that the underlying layer has to... the base layer has to run those fraud proofs, right? 

[00:23:45] Dankrad: [00:23:45] Correct. But there's a timeout of one to two weeks, typical for a fraud proof construction. So we just need to be sure that we get that fraud proof in within that deadline, which is a very long time, even to fix very major problems. 

[00:23:59] Anatoly: [00:23:59] Sure. But I can exit out of the roll-up, because in those two weeks nothing ever happens, right? We're going to be running for 10 years. No one's ever created a fraud proof. Everyone stops paying attention to it. Those nodes go down all the time. They're delegated to the B-team, and then somebody... just like that, a shard gets hosed and they exit a million ETH out of it. 

[00:24:23]Dankrad: [00:24:23] That's an interesting question. I think we can actually solve that. You can essentially pay people for watching it in a decentralized way, like proof of custody is the relevant construction here, where you basically ensure that people actually do the computation, verify it, pay them for that. 

[00:24:44] Anatoly: [00:24:44] But how fast would you even fire that alarm, in your mind? Is it going to be hours, minutes, seconds, days? 

[00:24:53] Dankrad: [00:24:53] Why is it not milliseconds? It's just running a computation and immediately producing this proof that says, yeah, this was incorrect. There's no human operator, that is what is important to me. 

[00:25:08]Anatoly: [00:25:08] The shard that's running this computation doesn't have to share this data with anyone else because they're the only ones that's processing, right? 

[00:25:16] Dankrad: [00:25:16] No, that's incorrect. That's an essential part of shared security. But, so, we're again going very deep into sharding here. I'm not certain it's the right thing, but I just want to make clear that what Anatoly just claimed is not right. Shared security means the availability of all shards is, at every instant, guaranteed for all the nodes; data availability checks do exactly that. 

[00:25:41] Anatoly: [00:25:41] But my tiny laptop is not processing all the data from every shard. 

[00:25:45] Dankrad: [00:25:45] Yeah. It doesn't need to. That's the amazing thing. It doesn't need to do that. We have a construction that only allows you to verify the whole thing by only taking a small number of samples. 

[00:25:57]Anatoly: [00:25:57] But I'm not executing any of the state, so what I'm trying to get to is that my laptop, which is participating in ETH-2, will see part of the state - part of the proof that somebody else has all the data, but at what point does anybody actually verify that something invalid happened in that shard? 

[00:26:18] Dankrad: [00:26:18] So that is part of the roll-up construction, and I just said that you can have people who are incentivized to watch that and immediately produce fraud proofs when they see it.

[00:26:28] Anatoly: [00:26:28] So in your mind, if the shard hides its data except for the availability groups... 

[00:26:35] Dankrad: [00:26:35] Well you can't hide the data, when you produce the availability proofs, they actually ensure that the data is available, I mean, that's the whole point of them. 

[00:26:42] Anatoly: [00:26:42] They don't ensure that it's available to everybody all at once, because that would require like quadratic bandwidth. 

[00:26:49] Dankrad: [00:26:49] And that's exactly what sampling does. So you encode the data in a way that even if only some percentage, say 50%, of the data is available, then you know that the whole data is available, because if you have 50%, you can readily reconstruct the rest. That's the encoding part. The second part is random sampling. You take a small number of samples, say 30, and if all of them return positive, you know with probability at least one minus two to the power of minus 30, so very, very high probability, that the whole data is available, because if less than 50% were available, then one of your samples would fail. That's how data availability checks work. So they do ensure that the whole data is available to everyone. 
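(A rough sketch of the sampling bound Dankrad describes, with illustrative parameters rather than ETH-2's actual ones: if less than half of the erasure-coded data is available, which is below the reconstruction threshold, then each uniformly random sample is missing with probability at least one half, so the chance that all k samples still succeed is at most (1/2)^k.)

# Rough sketch of the data availability sampling bound above.
# Parameters are illustrative, not the actual ETH-2 values.
def false_accept_bound(samples: int, available_fraction: float = 0.5) -> float:
    """Upper bound on the chance that every sample succeeds even though only
    `available_fraction` of the erasure-coded data (below the threshold needed
    to reconstruct the block) is actually available."""
    return available_fraction ** samples

print(false_accept_bound(30))   # ~9.3e-10, the failure chance behind the "one minus 2^-30" confidence quoted above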

[00:27:31]Anatoly: [00:27:31] And that happens 10 minutes later? 

[00:27:32]Dankrad: [00:27:32] It depends. There are several layers here. 

[00:27:35] Anatoly: [00:27:35] So at some point in time, for you to guarantee that this invalid exit out of a roll-up gets flagged, somebody has to reconstruct all that information and then fire PagerDuty?

[00:27:52] Dankrad: [00:27:52] What is the PagerDuty doing in there? A fraud proof is purely something you send over the network that automatically gets accepted by all the nodes and instantly makes them change their view of the network and notice: no, this part of the chain, I will not follow it anymore, because it is invalid. There's no human operator. There's no PagerDuty. There's nobody. This is an important part to me, actually. That's why I'm arguing for these full nodes: I don't want anyone to have to wake up in the middle of the night to make sure that the network keeps running. This is the point: the full nodes will just not follow the invalid chain. And if that's not the case, then I'm very worried, because it's very hard to revert it later, like hours after it happened.

[00:28:37] Anatoly: [00:28:37] But that's true about every properly constructed Layer-1, that invalid blocks get dropped by valid nodes, right? No one's arguing about that. But because sharded systems hide some of the computation, they don't force everybody to replicate. 

[00:28:53] Dankrad: [00:28:53] We're drifting away from the debate and we are very far apart. I don't even know why we are talking about sharded systems, because actually our...  

[00:29:01] Richard: [00:29:01] If I understand correctly, I think Anatoly is essentially saying that the way ETH-2 is set up, it's unable to achieve its scalability goals and some sacrifices have to be made.

[00:29:15] Anatoly: [00:29:15] The problem is this idea that you have a bunch of small nodes that partially verify things and, like, eventually get to some point where there's a fraud proof generated. That delta is so long that it's prohibitively long for finance at scale. Why would I have a system where I'm going to wait 10 minutes or however long before it detects that something went wrong, versus a system where that's guaranteed to occur in like 400 milliseconds?

[00:29:45]Dankrad: [00:29:45] Because your system relies on the honest majority of relatively small number of validators, and that is something I'm not willing to accept.

[00:29:58]Anatoly: [00:29:58] It doesn't rely on an honest majority. 

[00:30:01] Dankrad: [00:30:01] I'm not able to verify it, because as a small-time user, I will not invest $5,000 into a node, plus a gigabit internet connection. I mean, fine, maybe we will have a gigabit internet connection, but I don't want to sacrifice one gigabyte, or gigabit, of my bandwidth constantly just to run Solana.

[00:30:22] Anatoly: [00:30:22] Okay. But me, as a small laptop with one megabyte per day, I'm not going to verify all the state transitions in all the ETH-2 shards, because assuming that it's running the same number of use cases as Solana, it would require the same amount of bandwidth in total. So I'm relying on my partial computation to give me some statistical... 

[00:30:44] Dankrad: [00:30:44] We have solutions that scale for all of these problems.

[00:30:48]Anatoly: [00:30:48] So assuming the rest of ETH-2 is corrupted, how is my node with one megabyte per day, going to detect that something went wrong? 

[00:30:57] Dankrad: [00:30:57] There are two things that you need to do. One is data availability sampling, to make sure that all the data is available, because otherwise someone can hide invalid data in something that they don't make available to anyone, and that prevents anyone from producing fraud proofs. And the second thing is fraud proofs. You then need a way that instantly communicates to everyone in the world: this is an invalid chain, do not follow it, and that's it. 

[00:31:26] Anatoly: [00:31:26] So I have a thousand shards, the aggregate amount of data that we're producing is equivalent to Solana, and I have my small laptop, but these thousand shards are all corrupted. They all got corrupted, except for my small laptop. How long until I get to produce this fraud proof?

[00:31:45]Dankrad: [00:31:45] As long as one of them is corrupted. That's enough for you. You don't need to follow the rest. 

[00:31:49] Anatoly: [00:31:49] How long until I discover that the rest of the network is fully hosed? 

[00:31:54] Dankrad: [00:31:54] Sorry, again? 

[00:31:55] Anatoly: [00:31:55] I am running an ETH-2 node on my laptop. I've got a one megabyte per day allowance. 

[00:32:03] Dankrad: [00:32:03] I do not know where one megabyte per day comes from, that is a very, very low limit. 

[00:32:08] Anatoly: [00:32:08] Sure. Whatever. One one-hundredth of the rest of ETH-2. You have a hundred shards, my laptop can only handle the capacity of one shard. Something goes horribly wrong. The rest of ETH-2 is corrupted, except for me. How long until I find the spot where that corruption occurred and create a fraud proof? 

[00:32:28] Dankrad: [00:32:28] I think this is getting super out of hand. We're getting into so many little topics because you're insisting on... I thought the debate topic was very clear... 

[00:32:39] Richard: [00:32:39] Can you reel back and explain for us how this is related to the original topic? 

[00:32:44] Anatoly: [00:32:44] My point is pretty simple. If I maximize the minimum set of nodes, the Nakamoto coefficient, there are at least three times as many full nodes on the network, right? If that minimum set of nodes is 10,000, then there's 30,000 nodes in the network. That means that within 40 milliseconds, one of these 30,000 will fire a flag and say, hey, look, something bad has happened on all the other nodes. This idea that you have more survivability on sharded systems, or by building a system that is doing these complex operations, I think is going to achieve less security, even for what it's trying to do, than just doing the obvious: maximize the minimum set of nodes that add up to 33%. That is the core part of what you're doing to create security. If you achieve that, you achieve everything else. 
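(A rough sketch of the arithmetic behind the "three times as many" claim, under the simplifying assumption that the minimum attacking set holds just about a third of the stake: if the N largest validators together hold roughly 1/3 of the stake, the smallest of them holds at most (1/3)/N of the total, and every validator outside the set holds no more than that, so the remaining roughly 2/3 of stake must be spread over at least (2/3) / ((1/3)/N) = 2N validators, for a total of about 3N consensus nodes. With N = 10,000 that is the 30,000 figure used here.)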

[00:33:36] Dankrad: [00:33:36] No you don't, you only get crypto economic security and you don't get cryptographic security, which is not enough. This is bad. I really challenge you. No you do not get cryptographic security because the majority of your stake can just change the rules. 

[00:33:52]Anatoly: [00:33:52] Same as in ETH-2, that fork is invalid. They don't follow it. Those nodes get dropped. 

[00:34:00] Dankrad: [00:34:00] But I, as a user, I don't have a way to detect it, because I just trust the majority. I don't have a way to verify the chain, so I will trust the majority, and then suddenly they will make me follow an invalid... 

[00:34:12] Anatoly: [00:34:12] Right. So your point, "I don't have a way to detect it," is what I'm arguing about. In ETH-2, my tiny laptop, assuming everyone else is corrupted, is going to take days to get all that data and process it and detect it. In Solana, my expensive machine, that me as a person can deploy anywhere in the world permissionlessly, will, as soon as I deploy it, in one to two... one second at max, detect that the rest of the network went bonkers. And yeah, it costs more money, but I can do it. I can actually achieve what I want with extremely tight bounds on information. 

[00:34:50] Dankrad: [00:34:50] That's not how users will run blockchains. I mean, the majority of people, they will have a mobile phone, so we need to be able to create a fully validating client that can run on a mobile phone, and this is our goal. This is what we're doing. And concretely, you still can't achieve that, so I don't see how Solana is going to do that. The mobile phone user in Solana is going to rely on the honest majority of validators, and if that fails, if they just decide to change the rules... they'll say, our reward is too low, we need to increase it. And they don't think they're dishonest; they have a valid reason to change the rules. So they're just going to do it. And the users don't get a say in it. This is the problem. 

[00:35:42] Anatoly: [00:35:42] So what is, like, an actual bound on how long you expect a phone to take to detect that the rest of the network has been corrupted? 

[00:35:51] Dankrad: [00:35:51] It's instant, because the only thing you need is this very small fraud proof that says: here's what happened, here is someone who created a block that did not follow the rules. 

[00:36:02] Anatoly: [00:36:02] But that doesn't exist. There's no zero knowledge proof that fully rolls up... 

[00:36:06] Dankrad: [00:36:06] You do not need a zero knowledge proof. That is incorrect. Roll-ups do this today. Look at Arbitrum, for example. Look at Optimism. They do this right now. They both have a system where they can run something that is either the EVM or very similar to the EVM, and they run it in such a way that they produce a log of the data, so that you can provide a very small message to prove that something incorrect, an invalid state transition, has happened. Someone has changed the rules here, period. And then you don't need to worry about that chain. You get that short message and you're like, this chain is invalid, so which is the next one, which is the next one that has the most...
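(A toy sketch of the generic fraud-proof idea described here, not Arbitrum's or Optimism's actual protocol: a tiny state machine, a claimed state transition, and a check any node can run automatically without trusting whoever produced the block. The state commitment and transition rule below are deliberately simplified stand-ins for the Merkle roots and EVM execution a real roll-up uses.)

# Toy sketch only; names and rules are illustrative, not a real roll-up's API.
import hashlib, json

def state_root(state: dict) -> str:
    """Toy commitment to the state (real systems use Merkle or Verkle roots)."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def apply_tx(state: dict, tx: dict) -> dict:
    """Toy transition rule: move `amount` from `src` to `dst` if the sender is funded."""
    new = dict(state)
    if new.get(tx["src"], 0) >= tx["amount"]:
        new[tx["src"]] -= tx["amount"]
        new[tx["dst"]] = new.get(tx["dst"], 0) + tx["amount"]
    return new

def is_fraudulent(pre_state: dict, tx: dict, claimed_post_root: str) -> bool:
    """A fraud proof is essentially (pre-state witness, tx, claimed root): re-execute and compare."""
    return state_root(apply_tx(pre_state, tx)) != claimed_post_root

# A block producer claims a post-state in which coins appeared out of thin air:
pre = {"alice": 5, "bob": 0}
tx = {"src": "alice", "dst": "bob", "amount": 3}
bogus_post = {"alice": 2, "bob": 1_000_000}
print(is_fraudulent(pre, tx, state_root(bogus_post)))   # True -> nodes reject this chain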

[00:36:53] Anatoly: [00:36:53] But there's no way like, roll-ups compress the execution state, like all the intermediate execution state transitions. 

[00:37:01] Dankrad: [00:37:01] They don't need to compress the executions. The point is, if someone does an incorrect execution, you can challenge that using a fraud proof. You just send a small proof that this is incorrect, and that's done. You don't have to worry about this.

[00:37:16] Anatoly: [00:37:16] Like I said, how long is that going to take to detect? 

[00:37:19] Dankrad: [00:37:19] As soon as someone runs it look... 

[00:37:23]Anatoly: [00:37:23] So how long until my laptop runs it? I have one laptop... 

[00:37:27] Dankrad: [00:37:27] It's not important that your laptop runs it, someone else can run it, they will send you the fraud proof, and then you can reject the transaction, 

[00:37:36] Anatoly: [00:37:36] I'm relying on trusting somebody else to go through this work. 

[00:37:41] Dankrad: [00:37:41] You are relying on one out of the probably thousands of people who run full nodes, roll-up full nodes, to verify this, and that one of them will be honest and generate this fraud proof. But that is a one-out-of-N honesty assumption, very different from a majority assumption. You are claiming there's no difference between assuming that 66% of stakers are honest, and that one out of thousands is honest. You're claiming that's the same? 

[00:38:10] Anatoly: [00:38:10] No. 

[00:38:11] Dankrad: [00:38:11] You agree that one out of N is a much, much weaker assumption. Like, I need to rely on much, much less than with a majority honest assumption.

[00:38:21] Anatoly: [00:38:21] A hundred percent agree with you there. But when you maximize the Nakamoto coefficient, you're still playing by those rules: the total number of validators is at least three times as much, and you only need one of them to notify you that something went wrong. 

[00:38:38] Dankrad: [00:38:38] No, the point is, your notification is someone sends you a message, but that might be too late. My point is that fraud proofs are machine processed. They are processed automatically, and again, your node will not even show you this chain; your node will not show you this chain at all. You are hoping that someone will put it on Twitter, or give you a call, but maybe the transaction is processed automatically. Maybe by the time you have seen that, the money is already gone, someone has cashed out, converted it into something else, and you can't trace it anymore. 

[00:39:16] Anatoly: [00:39:16] So that data availability part, all those things that you're doing with sharding in ETH-2, that can be done just as well on any other system, right? You can subsample, you can do all these things later. And the "later" part, that's the key part here: if you optimize for the core Nakamoto coefficient, then the probability of all those things working and synchronizing much, much faster goes up. 

[00:39:44] Dankrad: [00:39:44] Oh, look, I am all with you if you're saying, "Oh yeah, we can take Solana and turn it into a roll-up so that you have data availability, fraud provability." I'm all for that. Actually, that's what I want people to create on ETH-2. However, I'm not convinced that you're building the right system for that, because you don't even commit to state roots, and that is the very first and most essential step. So go ahead and do that, and I'll believe that you want to do that, but right now you don't have it.

[00:40:13]Anatoly: [00:40:13] Because it's not as important as maximizing the Nakamoto coefficient.

[00:40:18] Dankrad: [00:40:18] In my book, security is the most important part. I don't want to build a system that's not secure, and I think your system is much, much less secure. Like, we're putting the security first. 

[00:40:29] Anatoly: [00:40:29] I don't want to build an insecure system. 

[00:40:32] Dankrad: [00:40:32] But you are relying on the honest majority, you're relying on crypto economic majority. 

[00:40:37] Anatoly: [00:40:37] Absolutely not! How? I can run a node, and so can anybody else that cares about security. Me as a user, I'm not trusting the majority of the network to behave correctly, right?

[00:40:50] Dankrad: [00:40:50] So you literally set out the debate topic to say it doesn't matter whether people run full nodes. The only thing you need to optimize is this minimum set. I'm sorry, like, now you're contradicting yourself.

[00:41:04] Anatoly: [00:41:04] No, because the side effect of optimizing that minimum set is that the maximum set is at least three times as large. If you optimize for the minimum set of nodes, the probability of anything going wrong goes down exponentially. That's step one. And that's critical for normal financial markets: you need to make sure that the possibility of this happening is as close to zero as possible.

[00:41:28] And then this idea that somebody somewhere, sometime later, detects it, that also goes up, because that set is at least three times as much. But because you've optimized for number one, you've created a system where that information propagates as fast as possible. Tell the user, "Hey, something went wrong," or, if I can't actually tell them the data, "What do you want me to do?" Right? A bunch of things have to go wrong. That delay is the delay that breaks finance. So this is the core of what we're doing, right? It's not just securing the information, it's how fast it propagates, and you can only do that by maximizing the Nakamoto coefficient.

[00:42:08] Richard: [00:42:08] The thing is that maximizing the Nakamoto coefficient, we all understand, I think, is something that we want to do in order to ensure security of the network, everything else being equal. Doesn't it stand, though, that it helps to have more full nodes?

[00:42:23]Anatoly: [00:42:23] But you can't maximize the Nakamoto coefficient without also maximizing the full node count. Because that full node count has to be at least three times the size of the Nakamoto coefficient. 

[00:42:34] Richard: [00:42:34] Wait, so you're saying...

[00:42:36] Anatoly: [00:42:36] If you only maximize for full node count, right, I have these partial systems that do partial computation, nobody actually fully observes anyone, and you're relying on somebody somewhere figuring out that there's enough data to go signal that something went wrong. That delay: give me the SLA, right? Give me something that shows that we will do this within X amount of minutes. Then it's an interesting question. Like, is that achievable? 

[00:43:02] Dankrad: [00:43:02] Why should it take longer than one second to distribute a proof? It's like a block of a few hundred kilobytes; I just send it to the peer-to-peer network. Anyone sees immediately, this is an important message, just distribute it to everyone.

[00:43:18] Anatoly: [00:43:18] But they're all hosed, right? Like, in the scenario we're talking about, everybody's corrupted.

[00:43:24] Dankrad: [00:43:24] My assumption is that you are connected to at least one honest node. That is important. I mean, you need that in any system. This you cannot change. And that one node can give you the fraud proof. 

[00:43:38] Anatoly: [00:43:38] Sure. That one honest node, how long does it take for them to get the information from one? 

[00:43:45] Dankrad: [00:43:45] I mean, they are presumably connected to more honest nodes, so we need some network that... 

[00:43:51] Anatoly: [00:43:51] So what percentage of honest nodes, right? This problem needs to be better defined. But the Nakamoto coefficient is a simple definition that's easy to understand: if you maximize that, it critically improves the core security, versus these things where you're relying on some unknown number of honest nodes. 

[00:44:10] Dankrad: [00:44:10] Look, look, I do not disagree in terms of censorship resistance at all, but I disagree in terms of security. I think you can do much, much better than that; the Nakamoto coefficient does not tell you anything about it. 

[00:44:22] Richard: [00:44:22] Can you expand on that point a little bit, Dankrad? Why do you think the Nakamoto coefficient doesn't really help with security?

[00:44:27] Dankrad: [00:44:28] Well, because we want security to be independent of the honest majority of the stake. We want security to be purely resting on cryptographic assumptions. And we can actually do that, or with maybe very, very minor extra assumptions. So in terms of, when we go to an Optimistic roll-up, then we make very, very minor assumptions about the network, but they all come in the form of one-out-of-N honest assumptions. So like, we have thousands of full nodes, you are connected to maybe 50 of them, right? And you assume that you're connected to at least one of the honest full nodes and that they are connected with each other. And that one out of all of them is going to detect the fraud.

[00:45:15] That's all the assumptions we need to make. And this is a much, much better assumption than having to rely on the majority never cheating, especially when the incentives are different. The problem is there can be very, very strong incentives to cheat. And so this is another problem with the Nakamoto coefficient, that it doesn't tell you about this: they might be completely different entities, right?

[00:45:39] One of them might be someone in China, one in India, all in different data centers and all different people doing it. But they still all have in common that, oh, it would be nice if our rewards would double, that would be nice, right? So the Nakamoto coefficient can't tell you that, it doesn't help you with that. If the incentives are aligned, they might still do it. You have to consider this bribery model, and the Nakamoto coefficient does not tell you about security. That's what I'm saying. 

[00:46:09] Anatoly: [00:46:09] But you're, I think, setting up a straw man, right? Like, in all Layer-1s, invalid state transitions can be detected by one honest node.

[00:46:19] Dankrad: [00:46:19] Right, I agree. The question is, is that honest node able to automatically stop all the other honest nodes, or especially the light clients, which we are running in the future? Is it going to be able to stop them from following that wrong chain? Because if not, well, if it takes two hours to call everyone and alert everyone on Twitter and so on, then by that time the attacker might already have cashed out and gotten a lot. So we need an automatic, instant system, and that's only possible if you have fraud proofs that can be automatically verified.

[00:46:54] Anatoly: [00:46:54] That propagation, that detection, that delta to detection is much, much faster in a network that is a single state machine, fully replicated, with a maximum Nakamoto coefficient, because that's guaranteed to replicate to three times that amount of nodes. How do you propagate that signal? How do you propagate that signal? If you can detect where that data is, you can easily point to it and say, "Hey, look, here's the invalid transition."

[00:47:23] Dankrad: [00:47:23] I need the full Solana state before that transition in order to know that you're telling me the truth. Basically, I still need to run a Solana full node. That's recursive. You're like... 

[00:47:38] Anatoly: [00:47:38] No. Users that care about security, or groups of users that care enough about security, can run their own full node, and they're guaranteed, within that local group, that they're super connected to everyone else within a very short time. 

[00:47:52] Dankrad: [00:47:52] You are saying then now that users should run full nodes. 

[00:47:57] Anatoly: [00:47:57] Anybody that wants to can run it, it is open and permissionless. Sure, I can reduce my security assumptions for some use cases, but not others. What I do care about is that this information can propagate as quickly as possible, and that the probability of this occurring is as low as possible. And to reduce the latter, you have to maximize the minimum set of nodes that collude. 

[00:48:24] Dankrad: [00:48:24] The problem is just that I need to have a very valuable use case in order to run a Solana full node. Like, even your run-of-the-mill millionaire isn't incentivized to spend thousands of dollars just on a super high speed internet connection that's constantly... they can't even watch Netflix anymore now, because their Solana node is eating all the bandwidth. 

[00:48:52] Anatoly: [00:48:52] That's not true, right? I'm able to run on my fiber at home... 

[00:48:59]Dankrad: [00:48:59] What's your connection? What's your connection? 

[00:49:02] Anatoly: [00:49:02] I have one gigabit up and down, Right? 

[00:49:04] Dankrad: [00:49:04] Well, yeah, that would be nice, I would love to get that. I can't get that here. Right? So my maximum upstream is 100 megabit. I believe, so... 

[00:49:14] Anatoly: [00:49:14] Agreed, agreed. So it requires a certain amount of work, right? 

[00:49:18] Dankrad: [00:49:18] Right. But you're saying the only people who are going to be realistically secure in the US are maybe, like, traders who really want to do very high speed trading, or people so rich that it makes sense for them to spend these thousands of extra dollars just to get the security. 

[00:49:41] Anatoly: [00:49:41] This is, I think, where the fundamental difference in goals lies between what the Solana network is trying to achieve and what Ethereum is trying to achieve. I think the fundamental difference is that the Solana network is trying to guarantee that everybody that wants to can get to the same level playing field as the best possible market makers, traders, and exchanges that are trading DeFi.

[00:50:09] Dankrad: [00:50:09] But I thought we just said you need to invest into a Solana full node. I don't quite agree that everybody can buy hardware for 5,000 dollars and get a gigabit internet connection. 

[00:50:23] Anatoly: [00:50:23] But it's impossible for me to do that anywhere else. It's impossible for me to have the same access as anyone else in any... 

[00:50:31] Dankrad: [00:50:31] Then let's clarify: by everyone, you mean people who have this kind of capital and incentive to do that. That is a bar that, I mean, excludes 99% of the world's population.

[00:50:46] Anatoly: [00:50:46] Sure, but that 1% is still quite large, and the benefit that this provides is a fair and open market for the world's financial information, so that it's not, like, locked up at NYSE or CME. And the system can't be as fast as NYSE and CME. But, like, the speed of light over fiber is as fast as news travels; the whole point of this is to make state transitions propagate at the speed of light through fiber, to get that SLA of something going wrong to be as tight as possible around the world. And yeah, it's expensive, but it's not censored, right? It's not behind a wall. It's not behind, like, a concrete building in Manhattan that you have to, like, you know... I don't even know how to get my trading box into that thing, right? It's impossible for me as an individual.

[00:51:35] But because this is all open source software, it's all open. It's an open network, on expensive hardware, but still open, commercially available in any city in the world at this point. Like, that's the cool part about it. 

[00:51:48] Dankrad: [00:51:48] So, to summarize then, the main goal of Solana would be building a decentralized exchange?

[00:51:54] Anatoly: [00:51:54] It's to propagate... like, it's that censorship resistance piece, right? How do we ensure fair and open access to information? That's the underlying goal, and it does not sacrifice security. What it does sacrifice on, as you made your point, is cost, but it can achieve this thing that you can't do with, like, subcommittee sharding or anything else. Those delays mean that I am stuck with Yahoo Finance data that's 15 seconds behind while all the important people have all the best access. 

[00:52:26] Dankrad: [00:52:26] I mean, I think there are some misconceptions as well, at least in terms of MEV. I think Solana will have exactly the same problems about MEV. 

[00:52:36] Richard: [00:52:36] So Kyle Samani wrote something about how there might not be a difference between having one million nodes that run Solana versus having 10 million nodes that run Solana. So it's hard to quantify the threshold at which decentralization is at a reasonable amount; the more, the better, but it just seems that a marginal amount of nodes being added doesn't add much to the network, which I think is maybe one of the cruxes of the issue here?

[00:53:06] Anatoly: [00:53:06] Yeah, because the probability that there's at least one non-colluding node in that set goes up exponentially, right? 

[00:53:14] Richard: [00:53:14] So I think another related question for Dankrad, and this is from one of our audience members: if you are saying that ETH-2 has been designed to enable people to run full nodes on their phones or cheaper personal devices, what do you think is the incentive for people to do that? Aside from the fact that it is cheaper, so it seems more economically effective to do so, why would people still do it? It's cumbersome. It still takes resources.

[00:53:45] Dankrad: [00:53:45] Yeah. So we want to go as low as possible on that, to make it as easy as possible to get this property that I mentioned earlier. Basically, we need to minimize this consensus extractable value. There will always be some users who just rely on the honest majority; at the very minimum it's bridges, right? We cannot build bridges that give full security guarantees; they always rely on the honest majority assumption. But the more users are behind full nodes, the better it is for the network, because it reduces the incentive to even attempt that. And of course, for the users, the incentive is that they will be protected from any kind of attack.

[00:54:29] So we have to make it super simple. Why would you run it if it's so cumbersome, right? And of course it can't use most of your bandwidth at the same time. That's why we're not optimizing for a gigabit internet connection; we are optimizing for a normal internet connection, and it can use at most a few percent of that. And the same for CPU: Ethereum is not designed to use 100% of your CPU, it's designed to run on much less than one CPU, right? And that's because we need this property that it's just super simple, just a no-brainer to run a full node, which is honestly not the case right now with ETH 1.

[00:55:07] But we are working on that. We have several pieces in the works.

[00:55:11] Richard: [00:55:11] I'm just curious: if we were to plot the number of voluntary full nodes versus the cost to run a full node, right? On the Y axis, you have the number of voluntary full nodes. On the X axis, you have the cost to run full nodes. What does that function look like? It's definitely a monotonically decreasing function, but what is the slope between the $3,000 price point and, say, the $500 price point? Is it very steep? Is it actually not that steep?

[00:55:40] Anatoly: [00:55:40] Yeah, 

[00:55:41] Dankrad: [00:55:41] I do not know. I think you had someone previously on the podcast who has done that. I haven't looked into it in detail. I would like to remark that this is a secondary measure, right? Of course, it tells you how many people are running full nodes. But as I said, I don't think that is really the core quantity you want to know. The core thing is that normal users who want to use the chain are behind a full node. I say behind, because essentially the full node acts like a firewall in that it protects you from anyone trying to pretend that some other, invalid chain is the real chain. You want users to be behind that.

[00:56:19] Anatoly: [00:56:19] I think on that we a hundred percent agree: you need full nodes to validate the chain, and normal users should be behind them. The question is, what are the trade-offs in terms of cost and communication complexity, right? We're starting from a different place than ETH, right?

[00:56:38] Ethereum is this massively huge ecosystem, it has product-market fit and a bunch of different use cases; Solana is a nascent one that's growing pretty quickly. And the core use case being built on Solana is one that's really well aligned with this idea of creating global information symmetry, right?

[00:56:58] It's like being the world's price discovery engine. The number of RPC nodes that people run, because their use case justifies the cost of doing that, is approaching the number of full nodes that are participating in consensus and earning rewards for it. I think it's something like 700 full nodes and 450 RPC nodes right now. That's really cool to me. People are actually willing to roll up their sleeves, build the system, and find a data center to deploy in, because this financial use case is so important to the world. That to me is an awesome sign, right? That we're on the right track, that this is an important aspect of these really complicated systems. With a ton of Pareto-efficient trade-offs, we've clearly picked one facet of that as being the most important thing.

[00:57:54] So having that validation, that it is important and people are willing to do the hard work to make it work, is awesome. Though at the end of the day, the users, the humans actually gaining some benefit from the system, are really the fundamental last line of defense for all this stuff.

[00:58:12] Richard: [00:58:12] Okay. So Anatoly, you mentioned RPC nodes; I have a question about light clients from an audience member. Light clients request data from full nodes, such as transactions relevant to the owner of the light client. When the ratio of light clients to full nodes gets too high, full nodes are overwhelmed, and this compromises performance on the light client side. If new light clients outpace full nodes, at what point does the light-to-full ratio become a serious problem for the Solana network?

[00:58:42] Anatoly: [00:58:42] It's a classical read/write database problem. The full nodes have to handle both the write throughput from the chain and serving a bunch of reads. There are a bunch of ways to optimize that, but there's, I think, a separate point.

[00:58:56] I think my understanding of ETH 2 is that the light clients on ETH 2 actually verify a cryptographic proof that the state transitions are valid, right?

[00:59:06] Dankrad: [00:59:06] No. That would be far in the future, but I think for now it would be fraud-proof-based, as I mentioned earlier.

[00:59:12] Anatoly: [00:59:12] I don't know what those are, but this has been a classic web problem: how do you scale users and servers? The more important part to me is that there is this growing ecosystem of use cases that supply their own full nodes, and then users that are protocol users. For example, Serum is one single protocol, but Serum doesn't run a single RPC node for all of the Serum users or all the Serum use cases; there are a bunch of small groups that have spun up their own infra. That's the last line of defense, people detecting when something goes wrong.

[00:59:49] The more of those we have, the more likely it is that we have robust infra to detect the rest of the network going haywire. But to go back to the debate, the point is that the thing we can optimize for is this Nakamoto coefficient. It's a hard problem, because to maximize the minimum set of nodes that add up to 33%, those nodes have to process more data. They have to process more signatures. They have to process more information.

[01:00:22] Dankrad: [01:00:22] This is one thing I want to challenge you on. Please tell me why you think they have to process more data if you want to maximize the Nakamoto coefficient? 

[01:00:31] Anatoly: [01:00:31] Quadratic complexity, right? What else are you going to do? As soon as you aggregate, you create some other, smaller set that can censor, right? There's no way to cheat that devil. So if you have a smaller subset that's aggregating information, that becomes the bottleneck. The best you can hope for is that the minimum set, that Nakamoto coefficient, is also the set holding 33% of the stake; it can definitely be worse, right?

[01:00:59] Dankrad: [01:00:59] So I do not think you need a number of messages linear in the number of validators; with signature aggregation you can really reduce that to a much, much smaller number of messages, so I don't see...
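
(Editor's note: a rough, hypothetical back-of-the-envelope comparison of the two communication patterns being argued about here. It is not a model of either Solana's or ETH 2's actual protocol; the fan-out and validator count are made-up parameters.)

```python
import math

# Rough, hypothetical comparison of per-round consensus message counts.
def all_to_all(n: int) -> int:
    # every validator sends its vote directly to every other validator
    return n * (n - 1)

def tree_aggregated(n: int, fanout: int = 16) -> int:
    # votes are combined up an aggregation tree: each validator sends one
    # message, plus a handful of aggregate hops per level of the tree
    levels = math.ceil(math.log(max(n, 2), fanout))
    return n + levels * fanout

n = 200_000
print(f"{all_to_all(n):,}")        # ~40,000,000,000 messages
print(f"{tree_aggregated(n):,}")   # ~200,080 messages, roughly linear in n
```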

[01:01:12] Anatoly: [01:01:12] Maybe, but none of that is live yet, right? We're definitely looking into BLS signature aggregation.

[01:01:18]Dankrad: [01:01:18] The beacon chain is live right now. 

[01:01:23]Anatoly: [01:01:23] Okay. It's live. It's definitely running, but it's not like handling a lot of use cases and it's not globally deployed, right? It's not proven out yet. 

[01:01:33] Dankrad: [01:01:33] Right, but it's counter to your point that you need these large nodes to process consensus messages, because we have 200,000 individual validators running this consensus together right now.

[01:01:46] Anatoly: [01:01:47] 2,500 different machines. 

[01:01:49] Dankrad: [01:01:49] But that is irrelevant from the consensus perspective, from the code perspective. It is 200,000 separate entities with separate keys. I know that many of them will be run on the same machine by the same person, of course, but it makes no difference to the consensus whether 10,000 of these are the same person or each is an individual. The point I'm making is that you can run the consensus with 200,000 entities easily, and ETH 2 proves that, on a Raspberry Pi.

[01:02:22] Anatoly: [01:02:22] Aggregation aside, the information about how that propagates: you can reduce the data, but it still has to propagate to everybody in a totally open way, right? That piece, that set being super connected, is inescapable. You can't cheat that part if you have a single subset that's aggregating messages for everyone else.

[01:02:50]Dankrad: [01:02:51] I don't see your point. The signature aggregation is, I mean...

[01:02:56] Anatoly: [01:02:56] It just reduces data. It doesn't reduce the quadratic message complexity needed to guarantee BFT. If I have a single node that's aggregating all of the signatures for everyone else, that thing can decide when to censor.

[01:03:10] Dankrad: [01:03:10] But there's no single node aggregating. Anyone can do the aggregation. If I get two signatures from anyone, I can just add them together and have one signature.

[01:03:20] Anatoly: [01:03:20] But I can't trust any single party to add them. Right? 

[01:03:23]Dankrad: [01:03:23] Anyone can verify it. Just see that the signature verifies. There's no trust in here.
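
(Editor's note: a minimal sketch of the non-interactive aggregation Dankrad is describing, assuming the py_ecc library's IETF-style BLS interface; the keys and message here are invented for illustration. Whoever holds two valid signatures can combine them, and anyone else can check the result against the public keys, so no trust in the aggregator is needed.)

```python
# Sketch of "anyone can aggregate, anyone can verify", assuming py_ecc's
# IETF BLS API (G2ProofOfPossession). Keys and message are hypothetical.
from py_ecc.bls import G2ProofOfPossession as bls

sk1, sk2 = 1234, 5678                       # hypothetical secret keys
pk1, pk2 = bls.SkToPk(sk1), bls.SkToPk(sk2)
msg = b"attest: block root 0xabc..."        # hypothetical vote payload

sig1 = bls.Sign(sk1, msg)
sig2 = bls.Sign(sk2, msg)

# Any third party who merely relays these two signatures can combine them...
agg = bls.Aggregate([sig1, sig2])

# ...and any other party can verify the aggregate against the public keys.
assert bls.FastAggregateVerify([pk1, pk2], msg, agg)
```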

[01:03:29] Anatoly: [01:03:29] Yeah, I think you're talking about the point where you've already received the message, versus: did I receive the message, or did it time out? The actual flow of information through the network can't go through a single coordinator.

[01:03:44] Dankrad: [01:03:44] It doesn't go through a single coordinator. The signature aggregation happens anywhere on the network; anyone can do it, and they do it all the time. So the messages get aggregated literally while they're propagating.

[01:03:57] Anatoly: [01:03:57] Correct. But you still need to make sure that information doesn't go through a single point of failure.

[01:04:04] Dankrad: [01:04:04] No.

[01:04:04] Anatoly: [01:04:04] Right? That's...

[01:04:05] Dankrad: [01:04:05] There's no single point of failure...

[01:04:07]Anatoly: [01:04:07] I agree. You have to design the system such that it doesn't propagate through a single point of failure. How long does it take for all 200,000 nodes to receive the aggregated messages for all the shards?

[01:04:20] Dankrad: [01:04:20] Well, you can look at the Beacon Chain right now. I think the threshold for an on-time attestation is 4 seconds, and 99% of those arrive. So that seems to work...

[01:04:33] Richard: [01:04:33] So, Anatoly and Dankrad, sorry to interrupt. It seems like we're going down the rabbit hole again, where we're cross-questioning each other's design choices.

[01:04:42] Anatoly: [01:04:42] Those are fun conversations. 

[01:04:48] Richard: [01:04:48] Yeah, right. Dankrad, I think earlier you were in a stream of consciousness critiquing the position from Anatoly's side, but then we got sidetracked and started talking about sharding. Do you have any other angles from which you want to attack the original position? Because I kind of feel that you were going to make multiple points, but they didn't get expressed as a result of us going down multiple rabbit holes.

[01:05:15] Dankrad: [01:05:15] My impression is that Anatoly has pretty much conceded that if users want security, they should run full nodes. So I do not know how far apart we are now. I think the difference is that I think full nodes should be cheap enough that even someone who only has a few hundred dollars at stake will run one, and Anatoly says no, it's fine, I only care about millionaires. That's my impression of our positions now.

[01:05:48] Anatoly: [01:05:48] I think the point of the debate originally was: is maximizing the Nakamoto coefficient sufficient? I think it is; that's the most important thing for security.

[01:06:01] Dankrad: [01:06:01] You have conceded that running a full node is important for the security of users.

[01:06:07] Anatoly: [01:06:07] Correct. I think that as a side effect of maximizing the Nakamoto coefficient, you grow the number of full nodes. What is the chain that's going to have the maximum Nakamoto coefficient 10 years from now? That thing is probably going to have the most full nodes too.

[01:06:20] Richard: [01:06:20] What is the relationship between the two? How does the effort to increase the Nakamoto coefficient actually help the number of full nodes?

[01:06:30] Anatoly: [01:06:30] Well, if you're just looking at standard BFT, it's at least three times: the smallest number of full nodes has to be at least three times the Nakamoto coefficient.
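
(Editor's note: a rough sketch of the relationship being described, with hypothetical stake numbers and assuming the usual BFT one-third threshold. The Nakamoto coefficient is the smallest number of validators whose combined stake crosses that threshold, and in a BFT system with n >= 3f + 1 the total validator count is at least roughly three times that.)

```python
# Hypothetical stake distribution; the Nakamoto coefficient is the smallest
# set of validators controlling more than 1/3 of total stake.
stakes = [900, 700, 500, 400, 300, 200, 200, 150, 100, 50]  # made-up numbers

def nakamoto_coefficient(stakes, threshold=1/3):
    total = sum(stakes)
    running, count = 0, 0
    for s in sorted(stakes, reverse=True):   # biggest validators collude first
        running += s
        count += 1
        if running > total * threshold:
            return count
    return count

k = nakamoto_coefficient(stakes)
print(k)            # 2 for these numbers: the top two hold 1600/3500 > 1/3
print(len(stakes))  # 10 validators, at least ~3x the coefficient
```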

[01:06:41] Dankrad: [01:06:41] But those are not actually the important full nodes. The important full nodes for security are the ones that have users behind them, users who are secured by the full node, and the Nakamoto coefficient does nothing about those. Yes, it increases the number of full nodes that someone runs in order to run a validator; I fully agree with that, but that is not the problem. That's not what's important. The important part is that the users are behind full nodes, which you have conceded, it seems to me.

[01:07:08] Anatoly: [01:07:08] But what's the guarantee I get as a user with a full node? When that minimum colluding set is as large as possible, I get all the information in the fastest time. My full node on a network that maximizes the Nakamoto coefficient provides much higher security than on a network that doesn't.

[01:07:29] Dankrad: [01:07:29] Well, I don't see how that provides higher security than the guarantee I have just described.

[01:07:38] Anatoly: [01:07:38] It's just the delay, right? How many seconds do I wait before I verify all the computation from all the other shards that may or may not have data? That time is what I'm optimizing for.

[01:07:55] Dankrad: [01:07:55] I don't see how time delay is part of security. Sure, if it's very long, I would agree, but we're talking about seconds here. This is not my definition of security.

[01:08:06] Anatoly: [01:08:06] I don't think it is seconds, though. Reducing the cost of a full node means relying on taking a longer time to figure out what the hell went wrong, or relying on somebody else to detect it.

[01:08:20] Dankrad: [01:08:20] Somebody else, I mean, I agree with that. I had talked about one-out-of-N security assumptions. But it is still on the order of seconds if it's not you yourself trying to do that.

[01:08:30] Anatoly: [01:08:30] So there's that reliance; it isn't just me, my full node, and the rest of the network going haywire, right? The security model for Solana is pretty simple. I run my full node for my users. If the rest of the network goes totally bonkers and just prints invalid state transitions or state headers, my node detects that and notifies all the users.

[01:08:55] Dankrad: [01:08:55] I'm in full agreement with that. But once again, now you've switched to the user running a full node. I'm very happy with that; that's exactly what I'm saying, I have never said anything else. I want users to have full nodes, and that is what defines security to me, not the Nakamoto coefficient. Security wouldn't even be violated if it were one.

[01:09:16] Anatoly: [01:09:16] Compromising on the Nakamoto coefficient, though, I don't think you can do that in a way that doesn't introduce other security assumptions, where I rely on some other full node to detect problems.

[01:09:28] Dankrad: [01:09:28] I disagree then. I think the two are independent, because we have small at-home stakers, and tons of them, which I don't think exist in Solana. And they are running the same thing as the big exchanges staking, so even if one exchange, which is unlikely and I hope it's not going to happen, if one exchange did have one third of the stake, it still wouldn't stop the rest of the network from detecting fraud and stopping it. I do not see the point.

[01:10:04] Anatoly: [01:10:04] The number of physical machines on ETH 2 and Solana is about the same. It's not like a 10X difference, right? The networks are about the same size right now.

[01:10:16] Dankrad: [01:10:16] But how many people run a Solana node not in a data center?

[01:10:18] Anatoly: [01:10:18] It's their data centers, relationships that they created; they went out and found them. Look at the map, there are just weird places in Russia and Ukraine.

[01:10:29] Dankrad: [01:10:29] If my node is in a data center, the data center controls it. If they wanted to, they have the key; they can sign anything they want with it. There's a legal relationship that stops that, but physically, as a first point, they have that control. And a jurisdiction could force them. The US could say: you are not allowed to send any message that signs a transaction coming from this account. And they could force, for example, their data centers to enforce that rule on the nodes they host. Now where would you be?

[01:11:05] Anatoly: [01:11:05] They could do that on home connections in the US as well.

[01:11:09] Dankrad: [01:11:09] True, but when you have distributed at-home validators across the world, it's going to be a lot harder. I'm not saying it's impossible; I'm not one of those claiming it's impossible to stop blockchains, I don't believe it is. But I think our design is a lot harder to attack in that way.

[01:11:32] Anatoly: [01:11:32] Until everybody has 1 gigabit at home, then it's the same. 

[01:11:36] Dankrad: [01:11:36] Well, and they're willing to max out that 1 gigabit just to run their node.

[01:11:41] Anatoly: [01:11:41] There are always some people, right? And those are the people that are receiving enough of a benefit to do that, to make that sacrifice.

[01:11:52] Dankrad: [01:11:52] Isn't your goal to scale to many, many more transactions? Is the one gigabit still true there? I feel like it's going to go up a lot with your current plans.

[01:12:02] Anatoly: [01:12:02] The current overhead has nothing to do with transaction throughput itself; that's a pretty low amount of overhead. So at peak, you know, at the theoretical limit, one gigabit should give you a hundred thousand validators voting in consensus once a second.
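
(Editor's note: a back-of-the-envelope check of that figure, using our own arithmetic rather than numbers from the episode: a one-gigabit link leaves roughly a kilobyte of budget per vote if 100,000 validators each vote once per second.)

```python
# Rough budget check: how many bytes per vote does 1 Gbit/s allow if
# 100,000 validators each submit one vote per second? (Assumed sizes.)
link_bps = 1_000_000_000          # 1 gigabit per second
validators = 100_000
votes_per_sec = validators * 1    # one vote per validator per second

bytes_per_vote_budget = link_bps / 8 / votes_per_sec
print(bytes_per_vote_budget)      # 1250.0 bytes of headroom per vote
```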

[01:12:19] Dankrad: [01:12:19] Well, we already have a hundred thousand validators, right? We have 200,000. 

[01:12:24] Anatoly: [01:12:24] But in different shards... it's not the same thing. There are a bunch of different sacrifices, which come back to the security question: what do I need to guarantee, right? Security of the network, or of the information flow? If you maximize the Nakamoto coefficient, I just have one full node, mine, and it's processing all the data. I'm guaranteed to immediately know when something goes wrong. If I don't have that, then I'm relying on other assumptions: that there's some number of honest nodes I can send these fraud proofs to.

[01:12:59] Richard: [01:12:59] Okay. So I think we're now doing cross-examination of the two systems again. I just have one last question for Dankrad, and then let's go to concluding remarks, where we synthesize our thoughts and then talk about what takeaways you had from your opponent and whether your position has been adjusted as a result.

[01:13:17] So the question for Dankrad, and we mentioned this briefly, is the data availability problem. There have been comments on the Twitter thread about how, if there are fewer full nodes being run, this can be another threat to the system. So Dankrad, are you able to explain very quickly what this is? Why is it a big deal, and why does having too few nodes contribute to this problem?

[01:13:40] Dankrad: [01:13:40] I mentioned this before, but I'll try to describe it very quickly. Basically, the property is that you erasure code the data whose availability you want to ensure, and then clients can sample a very small part of that data. Because they only need to know that 50% of the data is available, and not 100%, which would be the case if you don't erasure code it, that's enough, right? If many of these samples are available, they know that they could get the full data. Now, this isn't true if you only have, let's say, one full node, right? Say we have Adam and Eve, Eve the evil girl. Adam tries to sample, and every time Eve replies, "Here's your sample, here's your sample," and she just gives them. And after thirty samples Adam says, "Oh, it's okay, enough of them are available," and then she stops ever giving any samples again. Adam thinks the data is available, but it's not, because Eve won't give any more samples.

[01:14:37] So basically it fails with too small a number of clients. You need a large number of full nodes doing the sampling in order to ensure that someone can't hide the data from them by just satisfying their requests and then stopping giving out the data. Because if they did that with a large number of nodes, as soon as they've given out those samples, they have already given away more than 50% of the data, so they can't hide it anymore, right? But that doesn't work if you have too small a number of full nodes. I'm saying full nodes, but light clients can also do data availability sampling, and actually should in this design. So basically, anyone who does the sampling contributes to the security of the network.

[01:15:22] If there are enough of them, then data availability sampling has this property: the small number of samples that you do individually is enough to ensure that, across the network, there will always be enough samples available to reconstruct the data.
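
(Editor's note: a small worked example of why sampling is so effective, using our own numbers and assuming the standard 2x erasure-coding scheme in which any 50% of the extended data suffices to reconstruct: if an attacker withholds enough data that reconstruction is impossible, each random sample succeeds with probability at most one half, so thirty samples are all answered only about one time in a billion.)

```python
# Probability that a single sampling client is fooled, assuming 2x erasure
# coding (any 50% of the extended chunks suffice to reconstruct) and an
# attacker who has published strictly less than 50% of them: each uniform
# random sample then lands on a published chunk with probability below 0.5.
samples = 30
p_single_client_fooled = 0.5 ** samples
print(p_single_client_fooled)   # ~9.3e-10

# With many independent clients the attack gets even harder: answering their
# combined samples forces the attacker to reveal chunks, and once more than
# 50% of the chunks are out, anyone can reconstruct the full data anyway.
```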

[01:15:37] Anatoly: [01:15:37] But how do you know, until you actually reconstruct and validate and check? Because if I get 30 samples, that's not enough information for me to run any state transitions.

[01:15:49] Dankrad: [01:15:49] You don't need to run a state transition; data availability sampling is only about ensuring the data is available. The state transition is a completely separate problem.

[01:15:59] Anatoly: [01:15:59] But whoever is actually detecting that fraud was committed, right? They have to reconstruct the entire block and execute it.

[01:16:10] Dankrad: [01:16:10] Yes. There will be nodes. So, for example, if you have a rollup, there will be nodes that just follow that rollup. This is a one-out-of-N assumption; N being zero would be a problem, I agree. But how likely is that, out of hundreds of nodes? And you can have all these really nice constructions where you incentivize them. Of course you have this [inaudible] validator problem, right: why validate if there's never any fraud, which is hopefully the stable state, that fraud just doesn't happen, so people stop validating. But we have constructions to incentivize verification even then: proof of custody incentivizes people to do this verification by either getting paid for it or getting slashed if they don't do it. So we have ways of doing that.

[01:16:54] Richard: [01:16:54] Okay, great. So let's move on. Yeah, did you want to comment?

[01:16:59] Anatoly: [01:16:59] All that complexity is a really cool attempt to build something awesome, where you have these low-power nodes, millions of them around the world, that can blink on and off and add to security. So I want to acknowledge that what ETH 2 is working towards is awesome, but it's totally different from guaranteeing information symmetry globally, around the world, as fast as possible. And at the end of the day, I think that thing, if you build it, can achieve security that is as good or greater without any sacrifices, because the only sacrifice is hardware, and hardware gets cheaper every two years. If you're looking at 20 years from now, I'm going to sound crazy, but we're going to be talking about one terabit internet to the home. We're talking about one gigabit to the home right now.

[01:17:55] Dankrad: [01:17:55] Yeah, but the one thing that stays constant is that, basically, if you take a constant amount of bandwidth, the ETH 2 construction can scale that to a much larger computation, whereas the Solana construction will always be limited by that amount of bandwidth. So at the same bandwidth there's a constant factor of, say, between a hundred and a thousand, I would guess, between what Solana can process and what ETH 2 can process. It's just that right now Solana is taking a gigabit, where for us a few megabits have to be enough.

[01:18:37] Anatoly: [01:18:37] Correct. It is a question of what percentage of the world's bandwidth is going to be used by this. Is it going to be at the same level as email, where there was obviously a peak in the nineties and it then became a smaller and smaller part of the total, but still quite an important part, right? In terms of the value email provides relative to the percentage of the bandwidth it uses.

[01:19:03] Richard: [01:19:03] Okay, great. Let's move on to concluding remarks. Dankrad, are you able to synthesize your thoughts? I feel that earlier you already made an important point, which is Anatoly acknowledging the fact that running full nodes is important, especially for those that care about security. So maybe just elaborate on that point and add to it. And mention if your position has been adjusted somewhat, if at all, which I doubt. Go ahead.

[01:19:32] Dankrad: [01:19:32] Yeah, as I said, our positions are very, very close, in the sense that if you want security, you will run full nodes, and I'm very happy with that. I agree that this is the model for the future. Obviously we all have a lot of work to do on making that possible, but I'm happy with that conclusion, and I'll leave it to Anatoly.

[01:19:57] Richard: [01:19:57] But you also don't disagree with the idea that maximizing the Nakamoto coefficient is a paramount priority for security?

[01:20:07] Dankrad: [01:20:07] Oh, I do. I do disagree with that, in that I still think it is important, it's super important, but it is important for censorship resistance, which is the second property, which of course we want, we really want it, but which I think isn't really that valuable on its own. Or maybe let me move a bit closer to Anatoly's position than I expected to be.

[01:20:32] I at least think it's not that good for storing and holding value; that has to be built on a base of security. The security is the foundation. You need that first, and then you build a censorship-resistant system on top of it. Without the security, the censorship-resistant system doesn't help the root of the value, where you have your root token contracts and so on. It may be a very different proposition if you're talking about, for example, optimizing for trading systems and so on; then you might be much happier to give up some of that security in order to get this very fast system.

[01:21:12]Richard: [01:21:12] Okay. Thank you, Dankrad. Anatoly?

[01:21:16] Anatoly: [01:21:16] So I think that the sacrifice is in the cost of the hardware, because I agree with Dankrad that you need a full node to validate, and the ultimate security of the system really comes from that one full node. But by maximizing the Nakamoto coefficient, the side effect is that you have more full nodes, and you also guarantee the censorship-resistance piece, which means that detecting the all-hell-breaks-loose event is much, much simpler and faster. You don't have complex constructions; you just have full nodes that process all the data and pay the price for that, right? I love these debates. I genuinely believe that the approach ETH 2 has taken is really interesting. Maybe that's where the root of tokens, the settlement part of them, occurs.

[01:22:12] But there's another interesting aspect, which is what we're obsessed with: to trade and move and create, to synthesize all the world's information into one spot. And that's just a different problem. It's kind of interesting to see how those two obsessions led to two different paths.

[01:22:33] Richard: [01:22:33] I feel that there are points in there that Dankrad might want to follow up on, but then we would just never finish the debate. Thank you both so much.

[01:22:43] Anatoly: [01:22:43] I underestimated this; I thought it was a three-beer debate, but it's more like a six-beer one.

[01:22:49]Richard: [01:22:49] Yeah! Well, thanks for joining the debate today, Anatoly and Dankrad. How can our listeners find both of you, starting with Anatoly?

[01:22:55]Anatoly: [01:22:55] Just follow me on Twitter. I think that's the easiest way to connect. 

[01:22:59] Dankrad: [01:22:59] Yeah, same answer, really. Twitter, it's great. I also have a blog where I write about these things. Recently, actually sparked by a similar debate, I also wrote a post that goes right into this, about 51% attacks, if you're interested.

[01:23:14]Richard: [01:23:14] Can you say the website?

[01:23:15] Dankrad: [01:23:15] It's my name, DankradFeist.de.

[01:23:19] Richard: [01:23:19] So listeners, we would love to hear from you and have you join the debate via Twitter. Definitely vote in the post-debate poll. Also feel free to join our conversation with your comments on Twitter. We look forward to seeing you in future episodes of The Blockchain Debate Podcast. Consensus Optional, Proof of Thought Required. Thank you both for coming on. This has been great.

[01:23:35] Dankrad: [01:23:35] Thank you.

[01:23:39]Anatoly: [01:23:39] Thanks.

[01:23:39] Richard: [01:23:39] Thanks again to Anatoly and Dankrad for coming on the show. I think there was some nice convergence from the guests towards the end of the discussion.

[01:23:46] To follow up, there are two articles worth checking out. One is by Kyle at Multicoin, a big Solana backer and advocate. The article talks about why Kyle thinks Solana provides sufficient decentralization, despite the higher cost of running a Solana full node as compared to a Bitcoin full node or an Ethereum full node. The article is called "Technical scalability creates social scalability," and it's linked in the show notes.

[01:24:09] The other article is "What everyone gets wrong about 51% attacks," written by our guest today, Dankrad Feist. It outlines the kinds of security that full nodes bring to the network, which is basically his view today. The article also acknowledges what kinds of attacks full nodes will be unable to stop. It will be linked in the show notes as well.

[01:24:31] What was your takeaway from the debate? Don't forget to vote in our post-debate Twitter poll. It will be live for a few days after the release of this episode. And feel free to say hi or post your feedback for our show on Twitter. If you like the show, don't hesitate to give us five stars on iTunes or wherever you listen to this. That will help me grow the show and reach more audience members.

[01:24:49] And be sure to check out our other episodes with a variety of debate topics: Bitcoin's store-of-value status, the legitimacy of smart contracts, DeFi, PoW vs PoS, the case for government bailouts, central bank digital currency, and so on.

[01:25:02] Thanks for joining us on the debate today. I'm your host Richard Yan, and my Twitter is @gentso09. Our show's Twitter is @blockdebate. See you at our next debate!