Research Scientist @ NVIDIA

Yang Chen

I am a Research Scientist @ NVIDIA (ADLR Group). I received my Ph.D. from Georgia Tech in August 2024.

My current research interests lie in scaling reinforcement learning for reasoning.

AceReason-Nemotron 1.1

June 2025

Building on AceReason-Nemotron, I contributed to research combining large-scale supervised fine-tuning (SFT) with RL. We released the SOTA 7B model -- AceReason-Nemotron-1.1-7B.

AceReason-Nemotron

May 2025

I co-led the effort to scale our previous RL work to both the math and code domains. We released the SOTA medium-size model -- AceReason-Nemotron-14B.

AceMath-RL-Nemotron

April 2025

I led the research on scaling RL to train LLMs to solve math problems. We trained and released the AceMath-RL-Nemotron-7B model.

AceMath

December 2024

I co-led the research on developing math reasoning and reward models. We released the AceMath-1.5B, 7B, and 72B models along with AceMath-RM-7B and 72B, which surpassed GPT-4o and Qwen2.5-Math at the time of release.

Past Work

Research · 2022–2024

Multimodal LLM

Building AI that understands the visual world

Research · 2019–2024

Multilingual LLM

Bridging representation across global languages