Jiangjie Chen

Researcher

ByteDance

Biography

Jiangjie Chen (陈江捷) is a researcher on the ByteDance Seed Team. In 2024, he earned his Ph.D. from the School of Computer Science at Fudan University, Shanghai, China. His current research interests center on building reasoning models and autonomous agents:

  1. Reasoning Models: Advancing research on incentivizing and understanding advanced reasoning and planning capabilities in large models.
  2. Autonomous Agents: Developing advanced methods for autonomous, trustworthy, and personalized agents, and exploring their interactions with other agents and real environments.
Interests
  • Large Language Models
  • Reasoning
  • Mountaineering 🧗‍♂️
  • Tennis 🎾
  • Musicals
Education
  • Ph.D. in CS, 2019 - 2024

    Fudan University

  • B.S. in CS (honors), 2014 - 2019

    Fudan University

News

  • Apr. 2025: Presenting Seed-Thinking-v1.5 from the ByteDance Seed Team, a cutting-edge reasoning model that excels at math, code, science, and logical reasoning!

  • Mar. 2025: DAPO is out! A new critic-free RL algorithm that directly trains a pre-trained base model to SoTA performance on AIME 2024 without any SFT.

  • Mar. 2025: Four papers accepted to NAACL 2025: SelfGoal, EvoAgent, EasyTool and Barrier in Language Agent Planning.

  • Oct. 2024: Three papers accepted to NeurIPS 2024 Workshop on Open-World Agents: EvoAgent, SelfGoal and AucArena. See you in Vancouver!

  • Sep. 2024: Our survey paper on role-playing agents is accepted to TMLR!

  • Sep. 2024: Three papers accepted to EMNLP 2024! Two main-conference papers: Segment+, on long-context processing with short-context models, and CROSS, on role-playing evaluation; plus one Findings paper, DetectBench, on benchmarking detective reasoning.

  • Jul. 2024: Our work on Irrelevant Evidence got accepted to COLM 2024!

  • Jul. 2024: I have graduated from Fudan University and will officially join the ByteDance Seed Team as a full-time researcher.

  • Jun. 2024: How can we automatically extend a specialized agent into a multi-agent system to improve task-solving capability? We propose EvoAgent, a generic method that automatically extends expert agents into multi-agent systems via evolutionary algorithms. EvoAgent generalizes to any LLM-based agent framework and significantly enhances the task-solving capabilities of LLM-based agents!

  • Jun. 2024: Want your agents to win an auction for you? But does your agent know what is meant by such a vague, high-level goal as “winning an auction”? Check out SelfGoal! We propose an automatic approach that enhances language agents’ ability to achieve high-level goals with limited instructions and delayed feedback by adaptively breaking them down into practical subgoals. Really excited about automating agents to handle high-level tasks with minimal human instruction!

  • Jun. 2024: TravelPlanner got a Spotlight recommendation at ICML 2024!

  • May 2024: Just defended my thesis, officially a Dr. :)

  • May 2024: Four papers are accepted to the main conference of ACL 2024! They are: TimeArena, AnalogyKB, InCharacter and GumbelSoft! See you in Bangkok :)

  • May 2024: Our TravelPlanner got accepted to ICML 2024!

  • Apr. 2024: The first survey on role-playing language agents (RPLAs) is out! Dive into our comprehensive survey of RPLA technologies, their applications, and the exciting potential for human-AI coexistence. Understanding role-playing paves the way for both personalized assistants and multi-agent societies.

Experience

Researcher
ByteDance
Jul 2024 – Present · Shanghai, China

Research Intern
Allen Institute for AI
Jun 2023 – Sep 2023 · Seattle, Washington, U.S.
Aristo Team, mentored by Dr. Kyle Richardson. Worked on multi-agent reasoning and planning with large language models.

Visiting Research Intern
UC Santa Barbara
Sep 2021 – May 2023 · Remote
Hosted by Prof. Lei Li. Worked on machine reasoning over language with large language models.

Research Intern
ByteDance AI Lab
Nov 2019 – May 2023 · Shanghai, China
Mentored by Prof. Lei Li, Prof. Hao Zhou, and Dr. Changzhi Sun. Worked on knowledge-guided text generation and natural language reasoning.

Awards

Excellent Graduates of Shanghai
ACL 2023 Outstanding Paper Award
China National Scholarship for Doctoral Students
Honor Student Award in Computer Science of Top Talent Undergraduate Training Program

Recent Publications

(2025). DAPO: An Open-source LLM Reinforcement Learning System At Scale. Preprint.


(2025). PowerAttention: Exponentially Scaling of Receptive Fields for Effective Sparse Attention. Preprint.


(2025). DEEPER Insight into Your User: Directed Persona Refinement for Dynamic Persona Modeling. Preprint.


(2025). CoSER: Coordinating LLM-Based Persona Simulation of Established Roles. Preprint.

