NVIDIA Team Scores Kaggle Win With Reasoning Model
From NVIDIA: 2025-04-15 14:00:00
Team NVIDIA competed in the AI Mathematical Olympiad on Kaggle, submitting AI reasoning models to solve 50 complex math problems. Working relay-style across the U.S. and Europe, the team claimed the top leaderboard spot by correctly answering 34 of the questions on NVIDIA L4 GPUs. Their winning model combined natural language reasoning with Python code execution.
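To illustrate that reasoning-plus-code pattern, the sketch below alternates between model-generated text and execution of the Python snippets the model emits, feeding each program's output back into the prompt. This is a minimal sketch under stated assumptions, not the team's actual pipeline: the `llm_generate` helper and the `<code>`/`<output>` delimiters are hypothetical stand-ins for whatever inference backend and prompt format are actually used.

```python
import re
import subprocess
import sys

# Hypothetical helper: any chat/completion call to a served model could stand
# in here. The name, signature, and stop-token behavior are assumptions.
def llm_generate(prompt: str, stop: list[str]) -> str:
    raise NotImplementedError("plug in your own inference backend")

# Assumed delimiters the model uses to mark executable code in its solution.
CODE_BLOCK = re.compile(r"<code>(.*?)</code>", re.DOTALL)

def solve(problem: str, max_rounds: int = 4) -> str:
    """Alternate between natural-language reasoning and running the Python
    code the model writes, appending the program output to the transcript."""
    transcript = f"Problem: {problem}\nSolution:"
    for _ in range(max_rounds):
        step = llm_generate(transcript, stop=["<output>"])
        transcript += step
        match = CODE_BLOCK.search(step)
        if match is None:  # no code emitted -> treat the text as the final answer
            return transcript
        # Run the emitted code in a subprocess so the model can continue
        # reasoning from the concrete numeric result.
        try:
            result = subprocess.run(
                [sys.executable, "-c", match.group(1)],
                capture_output=True, text=True, timeout=30,
            )
            output = result.stdout or result.stderr
        except subprocess.TimeoutExpired:
            output = "Execution timed out."
        transcript += f"\n<output>\n{output}\n</output>\n"
    return transcript
```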
The NemoSkills team used the NeMo-Skills collection to accelerate large language model (LLM) training for the Kaggle challenge. Their winning model, built on Qwen2.5-14B-Base, was fine-tuned on synthetic solutions generated by larger reasoning models, producing a faster, long-thinking model that solves complex problems through natural language reasoning and Python code execution.
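One way such a synthetic fine-tuning set could be assembled is sketched below: a larger teacher model is sampled several times per problem, and only solutions whose final boxed answer matches the reference are kept as supervised training targets. The `teacher_generate` helper and the data layout are assumptions for illustration, not the team's published pipeline.

```python
from dataclasses import dataclass

# Hypothetical teacher call: a larger reasoning model sampled at temperature > 0.
# The function name and arguments are assumptions for illustration.
def teacher_generate(problem: str, num_samples: int) -> list[str]:
    raise NotImplementedError("call your own large reasoning model here")

def extract_answer(solution: str) -> str | None:
    """Pull the last \\boxed{...} value out of a solution, if present."""
    marker = r"\boxed{"
    start = solution.rfind(marker)
    if start == -1:
        return None
    start += len(marker)
    end = solution.find("}", start)
    return solution[start:end] if end != -1 else None

@dataclass
class SFTExample:
    prompt: str
    completion: str

def build_sft_dataset(problems: list[dict], samples_per_problem: int = 8) -> list[SFTExample]:
    """Keep only teacher solutions whose final answer matches the reference,
    then use them as fine-tuning targets for the smaller student model."""
    dataset = []
    for item in problems:  # assumed layout: {"question": ..., "answer": ...}
        for solution in teacher_generate(item["question"], samples_per_problem):
            if extract_answer(solution) == str(item["answer"]):
                dataset.append(SFTExample(prompt=item["question"], completion=solution))
    return dataset
```

Filtering on the final answer is a common way to keep only trustworthy teacher traces without grading every reasoning step by hand.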
To optimize performance, the team generated multiple candidate solutions in parallel and applied early-stopping techniques with NVIDIA TensorRT-LLM. FP8 quantization and ReDrafter speculative decoding were used to further speed up inference, and the resulting model performed well on unseen data without overfitting.
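The early-stopping idea can be sketched roughly as follows: many candidate solutions are sampled in parallel, and generation stops as soon as one final answer reaches a consensus threshold, so easier problems don't consume the full compute budget. This sketch deliberately omits the TensorRT-LLM, FP8, and ReDrafter specifics; `generate_candidate` is a hypothetical stand-in for a call to the serving backend.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical single-sample call; in practice this would be a request to an
# inference server. The name, signature, and seed argument are assumptions.
def generate_candidate(problem: str, seed: int) -> str:
    raise NotImplementedError("plug in your inference backend")

def extract_answer(solution: str) -> str | None:
    """Pull the last \\boxed{...} value out of a solution, if present."""
    marker = r"\boxed{"
    start = solution.rfind(marker)
    if start == -1:
        return None
    start += len(marker)
    end = solution.find("}", start)
    return solution[start:end] if end != -1 else None

def solve_with_early_stop(problem: str, num_candidates: int = 16, consensus: int = 6) -> str | None:
    """Sample many candidates in parallel and stop once one answer has been
    produced `consensus` times; otherwise fall back to a majority vote."""
    votes: Counter[str] = Counter()
    with ThreadPoolExecutor(max_workers=num_candidates) as pool:
        futures = [pool.submit(generate_candidate, problem, seed) for seed in range(num_candidates)]
        for future in as_completed(futures):
            answer = extract_answer(future.result())
            if answer is None:
                continue
            votes[answer] += 1
            if votes[answer] >= consensus:
                # Cancel samples that have not started yet and return early.
                pool.shutdown(wait=False, cancel_futures=True)
                return answer
    return votes.most_common(1)[0][0] if votes else None
```

Majority voting across independent samples tends to be far more reliable than a single greedy decode on competition math, and the consensus threshold lets the cheap cases exit early while hard cases use the whole sampling budget.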
Team NVIDIA plans to release a technical report detailing the techniques behind its winning solution, share the dataset and models on Hugging Face, and integrate the advancements into NeMo-Skills pipelines. Collaboration across the NVIDIA software stack also led to the development of the NVIDIA Llama Nemotron Ultra model, showcasing the team's dedication to improving AI reasoning models for math.
Christof Henkel, a member of the Kaggle Grandmasters of NVIDIA, regained the title of Kaggle World Champion after the competition win. The team also directed their $262,144 prize to the NVIDIA Foundation to support charitable organizations. Their achievements highlight the team’s commitment to advancing AI reasoning models and pushing optimizations into NVIDIA’s open-source libraries.
Read more at NVIDIA: NVIDIA Team Scores Kaggle Win With Reasoning Model