
Yang Chen

Research Scientist @ NVIDIA

I am a Research Scientist at NVIDIA (ADLR Group). I received my Ph.D. from Georgia Tech in August 2024.

I currently work on scaling reinforcement learning for reasoning LLMs.

Selected Projects

Nemotron-Cascade · December 2025

We scale Cascade RL to train general-purpose reasoning LLMs spanning RLHF, instruction following, math, code, and SWE. Our 14B Thinking model achieves state-of-the-art results on LiveCodeBench, outperforming Gemini-2.5-Pro, o4-mini, Qwen3-235B, and DeepSeek-R1-671B, and reaches silver-medal performance at IOI 2025.

AceMath · December 2024

We developed math reasoning and reward models, releasing AceMath-1.5B, 7B, and 72B, along with AceMath-RM-7B and 72B, which surpassed GPT-4o and Qwen2.5-Math at the time of release.


Past Work

Research · 2022–2024

Multimodal LLM

Building AI that understands the visual world


Research · 2019–2024

Multilingual LLM

Bridging representation across global languages
