Top AI systems from Google and OpenAI outperform top-tier high school students in the most challenging mathematics competition globally.

At IMO 2025, AI models developed by OpenAI and Google achieved gold medal-level scores, matching the performance of the world's strongest high-school mathematicians and surpassing earlier specialized systems.

At the 2025 International Mathematical Olympiad (IMO), AI models from both Google DeepMind and OpenAI demonstrated remarkable performance, achieving gold medal-level scores by solving five out of six problems[1][2][3][4][5].

Google DeepMind, with its AI model, competed officially in the IMO and had its results certified by the organisers, marking a significant milestone as the first AI to be officially recognised with a gold medal-level score[1][2][5]. OpenAI, on the other hand, did not participate officially but independently evaluated its model on the same IMO problems under controlled conditions and released its matching gold-level results before the official confirmation[1][2][3].

Both AI systems completed the exam under strict conditions that mirrored those for human contestants: two 4.5-hour exam sessions without internet access or external aids, producing full natural-language mathematical proofs for each problem, which were reviewed and scored by expert IMO graders (including former medalists)[1][3][5].

The IMO is renowned as a premier benchmark of mathematical skill for elite high-school students worldwide, involving exceptionally challenging problems[1][2]. This achievement reflects significant advances in AI reasoning and problem-solving abilities, demonstrating how far language models and specialized AI systems have progressed since the debut of ChatGPT in 2022 and DeepMind's earlier projects like AlphaGo and AlphaCode[3][4].

However, a handful of human contestants still outscored the AI systems, achieving full marks, showing that the very best human problem-solvers retain an edge over the machines[5].

| Aspect | Google DeepMind | OpenAI |
|---|---|---|
| Entered IMO officially? | Yes | No (self-evaluated independently) |
| Score | 35/42 points (5/6 problems solved) | 35/42 points (5/6 problems solved) |
| Medal level | Gold (officially certified by IMO) | Gold (self-reported, aligned with IMO standards) |
| Exam conditions | Same as human contestants, no external aids | Same as human contestants, no external aids |

The success of these AI models at the IMO marks a milestone in artificial intelligence's ability to tackle highly abstract and rigorous mathematical reasoning, narrowing the divide between human and machine mathematicians[1][3][5].

[1] https://www.deepmind.com/research/publications/2025-imo
[2] https://openai.com/blog/imo-2025
[3] https://www.imomath.com/news/ai-takes-home-gold-medal-at-the-2025-imomath-olympiad
[4] https://www.nature.com/articles/d41586-025-02840-z
[5] https://www.sciencemag.org/news/2025/12/ai-models-win-gold-medals-international-mathematical-olympiad

Beyond the competition itself, the gold medal-level performances of Google DeepMind and OpenAI at the 2025 IMO demonstrate how quickly AI problem-solving in complex mathematics is advancing, and they add fresh weight to the ongoing debate about technology's evolving role in society.
