Evolving program here:
Jennifer Balakrishnan (Boston University)
Quadratic Chabauty in higher genus
Abstract
Barinder Banwait
Abstract
Mazur's celebrated 1978 theorem determines the complete list of primes $p$ such that some elliptic curve over $\mathbb{Q}$ admits a rational $p$-isogeny. The natural strong uniformity analogue over degree-$d$ number fields — for each $d \geq 2$, determine the (conjecturally finite) set of primes arising as a $k$-rational isogeny degree for some elliptic curve over some number field $k$ of degree $d$ — is an open problem, already out of reach for $d = 2$ and $d = 3$. Over odd-degree fields the usual "non-CM" caveat is automatic, making $d = 3$ a particularly clean target. In joint work with Maarten Derickx, we have developed an open-source tool, isogeny-primes, which for any number field $K$ computes a provably correct superset of its isogeny primes. Sweeping it across the cubic fields in the LMFDB yields a concrete finite upper bound for strong uniformity in degree 3. The remaining task is realization: for each candidate prime $p$, either exhibit a cubic field $K$ and an elliptic curve $E/K$ witnessing $p$ as an isogeny degree, or rule it out — equivalently, decide which $p$ admit a non-cuspidal cubic point on the modular curve $X_0(p)$.
In this talk I'll describe an exploratory attempt to engage with this realization problem using contemporary AI tools: both as coding assistants for the classical computational attack on $X_0(p)$ (Chabauty–Coleman, Mordell–Weil sieve, explicit descent), and as lightweight predictors to help triage which candidate primes to pursue. I will present initial findings and some reflections on what this style of approach currently can and cannot contribute to problems of this flavour.
Gergely Berczi (Aarhus University)
A short tale of Stirling coefficients for symmetric powers
Abstract
Alyson Deines (CCR La Jolla)
Abstract
This talk will focus on transfer learning for rational L-functions, specifically between classical modular forms, Bianchi modular forms, Hilbert modular forms, and genus 2 curves over the rationals. In these cases transfer learning (LDA, SVM, neural nets) performs quite well. However, when the LMFDB was updated last fall, adding two more Bianchi modular forms, performance tanked in one of these settings: LDA trained on L-functions originating from Bianchi modular forms and tested on the other types. In this talk we explore this anomaly.
Jordan Ellenberg (University of Wisconsin)
1-Human-machine iteration I: introduction
Abstract
I will introduce the mechanisms of PatternBoost and funsearch/AlphaEvolve and talk about my own experiences using these two protocols to produce material of mathematical interest. In all cases, the experience is one of cooperation between traditional and ML methods, often with repeated back and forth between the two. In this talk I'll also try to give a sense of which problems seem best suited to these methods as they exist in 2026. (Perhaps our work this week will broaden my answer to this question.)
2-Human-machine iteration II: generalization
Abstract
I'll talk about the question of generalization: to what extent can we use ML methods to help us generate not only examples for single instances of a problem (e.g. a large capset in dimension 8), but also families of examples that would change our knowledge in the general case of a problem (e.g. an algorithm that, for each n, yields a large capset in dimension n)? I'll talk more specifically about a recent result using AlphaEvolve which ends in a theorem about large hypercubes in the Bruhat order on S_n, for arbitrary n.
3-Summarizing our successes and failures
Kyu-Hwan Lee (University of Connecticut)
Abstract
In this talk, I will discuss how one can interpret what a transformer has learned after training. The presentation will be based on two papers: arXiv:2502.10357 and arXiv:2511.12421.
Abbas Mehrabian (Google DeepMind)
AlphaEvolve user interface
Abstract
Tom Oliver (University of Westminster)
Data Representation in Number Theory
Abstract
Alexey Pozdnyakov (Princeton University)
Three lessons from the murmurations discovery
Abstract
Andrew Sutherland (MIT)
Abstract
I will demonstrate two new tools that leverage the capabilities of generative AI to allow researchers to explore questions in number theory (and other fields) with much greater ease and flexibility than was previously possible. The first tool is AlphaEvolve and the second tool is a new LMFDB-MCP connector. I will give live demonstrations of both tools that I hope will facilitate exploration and discovery by workshop participants, both during and after the workshop.