The gold standard for mathematical AI

Fields Model Initiative

Training LLMs for mathematics that converge to Fields Model performance

Apply for Compute →

Our Vision

The Fields Model Initiative accelerates open-source research on mathematical LLMs by providing large-scale compute (128+ H100 GPUs) to vetted projects in exchange for open mathematical data and open-source artifacts. Researchers apply with a proposal, contribute high-quality math datasets under an open license, and use our GPUs to train models whose outputs are shared with the community. Our long-term goal is an LLM that can genuinely assist mathematicians across all domains, ultimately approaching Fields Medal–level capability.

Apply Data Train
STEP 01

Apply

Researchers apply with a proposal for compute. Proposals can range from fine-tuning LLMs on mathematical data to analysing LLM checkpoints to better understand reasoning. We vet each proposal and decide whether to grant access.

STEP 02

Data

As a "fee" for access to the compute offered by the Fields Model Initiative, you provide mathematical data points, as directed by us, under an open licence.

STEP 03

Train

We grant you access to our GPUs. You then release your artefact, which will benefit mathematicians, under an open-source licence.

More about our vision

We ensure that all artefacts the community generates follow the same vision: converging towards an LLM that can provide genuine assistance to mathematics. At the same time, the "data fee" ensures that novel domains of mathematics that currently lack coverage receive it, so that, in the long term, a model trained on the growing dataset will perform mathematics at the level of a Fields Medalist.

Apply Now

All applications must be submitted in a specified format and will be reviewed by our team. This ensures that access to hardware resources is as smooth as possible and that no exciting idea is left behind. You must also agree that the data artefact you produce will be made available under an open-source licence.

We have a standardized process for submissions.

About

Founders

Simon Frieder

B+B, LLMC

simon@bb.ai
Pontus Stenetorp

NII, LLMC

pontus@nii.ac.jp

Members

Philip Vonderlind

B+B, AIMO

philip.vonderlind@aimoprize.com
Michal Štefánik

National Institute of Informatics (NII)

michal@nii.ac.jp

Partner

Our connections will help you achieve your research goals.

AIMO Prize

Contestants of the AIMO Prize will be able to train models on SotA hardware with our initiative!

For systematic collaborations, reach out to us at simon.frieder@aimoprize.com

Supporters

B + B

Benchmarks + Baselines

An AI non-profit founded by Simon Frieder with a strong focus on developing benchmarks for SotA AI systems.

NII

National Institute of Informatics | Research and Development Center for LLMs

Pontus Stenetorp's group and the LLMC provide access to the GPU resources of the NII's research cluster, which include a large quantity of SotA hardware.