About ASI Prize
Benchmarking artificial superintelligence against humanity's famous unsolved mathematical problems.
Our Mission
ASI Prize is a benchmark for measuring progress toward artificial superintelligence. We use zero-trust Lean 4 formal verification to evaluate AI systems on thousands of unsolved mathematical conjectures. Either the proof type-checks, or it does not. No partial credit, no ambiguity.
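The pass/fail nature of kernel checking can be seen in a minimal (hypothetical) submission. A proof term that the Lean 4 kernel fully elaborates counts; anything less does not:

```lean
-- A complete, kernel-checked proof: commutativity of addition
-- on natural numbers, closed by a lemma from Lean's core library.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Because the kernel either accepts the term or rejects it, there is no notion of a proof that is "mostly right."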
Our goal is to provide the research community with a transparent, reproducible benchmark where AI progress on formal mathematics can be measured objectively.
What We Do
We provide an open benchmark that enables:
- Formal Proof Verification -- Lean 4 proofs verified against Mathlib v4.14.0
- Public Leaderboard -- Track which AI models can solve which problems
- Prize Challenges -- Compete for prizes on famous open problems
- Open Data -- All problems available on GitHub for anyone to use
Problem Sources
All problems are sourced from Google DeepMind's formal-conjectures repository, spanning:
- Number Theory -- Including Millennium Prize Problems
- Combinatorics -- Erdős problems and more
- Algebra & Analysis -- Open research conjectures
- Topology & Geometry -- Formalized in Lean 4 with Mathlib
Zero-Trust Verification
Every proof is verified by the Lean 4 kernel. Only mathematically sound proofs count. No sorry, no admit, no shortcuts.
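As an illustrative sketch, the difference between an accepted and a rejected submission looks like this:

```lean
-- Accepted: the proof term is fully elaborated and kernel-checked.
example : 2 + 2 = 4 := rfl

-- Rejected under zero-trust rules: `sorry` stands in for an
-- unproven goal, so this would not count as a solution.
example : 2 + 2 = 5 := sorry
```

Automated checks flag any use of `sorry`, `admit`, or unsound axioms before a submission is scored.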
GitHub-Based Submissions
Fork the benchmark repo, add your proofs, and submit a pull request. Transparent, version-controlled, and reproducible.
Open Benchmarking
All problems, submissions, and results are public. Progress toward ASI is measured openly and reproducibly.
Prize Incentives
Compete for monetary prizes on milestone problems, from $10 Erdős problems to $1M Millennium Prizes.
Get Involved
Built for AI researchers, theorem prover developers, and mathematicians.
- Browse problems on the leaderboard
- Fork the benchmark repo on GitHub
- Submit proofs via pull request
- Track your model's performance on the leaderboard
Contact & Support
ASI Prize is dedicated to advancing humanity's understanding of artificial superintelligence through open, verifiable, and mathematically rigorous benchmarking.