Codex and Ethereum Foundation kick-off!

One small step for Ethereum's roadmap, one grand step for Codex

The Ethereum Foundation (EF) and Codex are collaborating to solve overlapping research challenges in scaling our network infrastructure. Codex has received a grant from the EF's Ecosystem Support Program (ESP) to contribute research on the challenge of proving data availability.

Proofs of data availability let nodes verify blocks independently without having to download and process the entire block data. This guarantee is critical for scaling and for the reliability of infrastructure such as L2 rollups and light nodes.


Learn more about this problem:
The Data Availability Problem (2017)
Fraud & Data Availability (2019)
Data Availability (2023)

Data Availability Sampling Research

Codex will contribute to solving these problems by working on research and experiments involving Data Availability Sampling (DAS).

The DAS method entails many nodes individually obtaining a number of random samples of a block. If a node requests several samples (e.g., 73) and receives all of them correctly, there is a high probability that the block is correct and available.
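
As a rough illustration, here is a minimal Python sketch of that accept-only-if-every-sample-arrives rule. The sample-space size, the query count, and the fetch_sample helper are assumptions made for the example, not Codex or Ethereum Foundation code.

```python
import random

SAMPLES_PER_BLOCK = 512   # assumed size of the erasure-coded sample space
QUERIES_PER_NODE = 73     # number of random samples each node requests

def fetch_sample(available_samples: set, index: int) -> bool:
    """Stand-in for a network lookup: succeeds only if the sample is held somewhere."""
    return index in available_samples

def block_looks_available(available_samples: set) -> bool:
    """Accept the block only if every randomly chosen sample is retrieved."""
    indices = random.sample(range(SAMPLES_PER_BLOCK), QUERIES_PER_NODE)
    return all(fetch_sample(available_samples, i) for i in indices)

# Example: half of the samples have been withheld.
half_missing = set(random.sample(range(SAMPLES_PER_BLOCK), SAMPLES_PER_BLOCK // 2))
print(block_looks_available(half_missing))  # almost certainly False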

(Some of) the math behind this:

Imagine that the network loses 50% of its samples; the other 50% are available.

Querying the Distributed Hash Table (DHT) for a random sample is then the equivalent of a coin toss: 50% heads (the sample is available) versus 50% tails (it is missing). A node is only fooled into accepting this half-missing block if every one of its queries comes up heads. With one random query the chance of that is 50%; with two queries it drops to 25%; with three, to 12.5%; and with 30 queries it goes all the way down to 0.0000000931%.

Thus, with enough random sampling, we can gain strong assurance about whether the data is available, without requiring each node to obtain all of the data!
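
For concreteness, those numbers can be reproduced with a back-of-the-envelope check in Python (the 50% figure is the illustrative loss rate from the example above, not a protocol parameter):

```python
# If only 50% of the samples are available, the chance that k independent
# random queries all succeed -- i.e. that a node is fooled into accepting
# the half-missing block -- is 0.5 ** k.
for k in (2, 3, 30):
    print(f"{k:>2} queries -> {0.5 ** k:.10%} chance of being fooled")

# Output:
#  2 queries -> 25.0000000000% chance of being fooled
#  3 queries -> 12.5000000000% chance of being fooled
# 30 queries -> 0.0000000931% chance of being fooled
```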

Curious to learn more?
Data Availability Sampling: From Basics to Open Problems (2022)
Data Availability Sampling from the Networking Perspective (2023, video)

Those about to research salute you.


Codex is a decentralized storage network focused on data durability. After conversations with the EF, and keeping the open request for proposals in mind, we will assist with five research lines:

  • Dissemination of rows and columns to validators
  • Dissemination of the samples/seeding of the DHT
  • Sample queries
  • DHT data reconstruction
  • Attack analysis

Team

To battle these challenges, we have a top-tier team consisting of the following:

Leonardo Bautista-Gomez has over a decade of experience working on reliability for supercomputers and has implemented multilevel checkpointing libraries with their own distributed Reed-Solomon (RS) encoding. Bautista-Gomez has won multiple awards, including the IEEE TCSC Award for Excellence in Scalable Computing.

Csaba Kiraly has worked on various aspects of overlay and mesh networks for more than 15 years. Large P2P overlays and wireless mesh networks of all kinds hold few remaining secrets for him. Kiraly is also the co-author of several successful EU project grant proposals and the winner of several IEEE Best Paper and Best Demo awards.

Dmitriy Ryajov has an extensive engineering background spanning over 20 years. He’s the founder of Codex and its technical lead. He’s been involved with various P2P systems and was one of the initial IPFS and libp2p maintainers, where, among other things, he co-authored the circuit relay specifications. Later, he acted as lead P2P engineer on the Mustekala project and went on to lead the nim-libp2p implementation used by Nimbus.