Since June 2023, Albert Q. Jiang has been a Research Scientist at Mistral AI, where his team focuses on the science and infrastructure of reasoning. His long-term research objective is to develop a mathematical superintelligence that is safe and aligned by construction. In pursuit of this goal, he has contributed to several frontier projects in large language models and reasoning systems, including pretraining work on Mistral 7B and Mixtral of Experts (2023), mid- and post-training research with Mathstral (2024), and large-scale reinforcement learning through Magistral (2025).
Albert completed his PhD at the University of Cambridge Computer Laboratory under the supervision of Mateja Jamnik and Wenda Li. His thesis was examined by Jeremy Avigad and Ferenc Huszár in October 2024 and passed with no corrections required.
His doctoral research focused on learning abstract mathematical reasoning with language models. He explored the autoformalization of theorems and proofs, developing large parallel datasets for statement autoformalization such as Multilingual Mathematical Autoformalization (MMA). He also integrated and improved premise selection tools using language models, studied human–AI interaction in mathematical problem solving, and investigated mathematical conjecturing as a step toward more advanced forms of machine reasoning.
https://albertqjiang.github.io/
https://scholar.google.com/citations?user=Fe_RBHMAAAAJ&hl=en