Jiangjie Chen

Ph.D. Candidate

Fudan University


Jiangjie Chen (陈江捷) is a fourth-year Ph.D. candidate in the School of Computer Science at Fudan University, Shanghai, China, where he is advised by Prof. Yanghua Xiao in the Knowledge Works Lab.

He is devoted to reasoning over natural language and to making machines right for the right reasons. His main research interests include (but are not limited to):

  1. Machine Reasoning, especially on endowing large language models with various kinds of human-like reasoning abilities, including analogical reasoning, counterfactual reasoning, decision-making, language planning, etc.;
  2. Text Generation, especially on building factual, faithful, controllable, and knowledge-guided text generation techniques upon large language models;
  3. The intersection of machine reasoning and text generation, i.e., understanding and achieving machine reasoning through the vehicle of natural language.

(Download my resumé. It could be outdated. 😶)

  • Large Language Models
  • Machine Reasoning
  • Applications of LLMs
  • Mountaineering 🧗‍♂️
  • Tennis 🎾 (3.0)
  • Musicals
  • Ph.D. in CS, 2019 - 2024 (estimated)

    Fudan University

  • B.S. in CS (honors), 2014 - 2019

    Fudan University


  • June 2023: Heading to Seattle for a summer internship at the Allen Institute for AI, working with the great Aristo Team!

  • May 2023: Check out two pre-prints on analogical reasoning, which extend E-KAR! AnalogyKB is a million-scale analogy knowledge base derived from existing KGs, enabling machines to acquire analogical reasoning skills. SCAR is a new challenge for evaluating LLMs’ structure abduction ability on scientific analogies, which is essential for human-like analogical reasoning.

  • May 2023: Got two papers about LLMs accepted to the main conference of ACL 2023! One analyzes why LLMs fail to generate negative knowledge despite being able to recognize it. The other is CoScript, which studies how to generate plans under constraints with LLMs. See you in Toronto!

  • Feb. 2023: Presenting VENCE at AAAI 2023!

  • Nov. 2022: Two papers accepted to AAAI 2023! One is VENCE, on correcting factual errors in texts, and the other is NEON, on explaining why a statement is false: both focus on solving these tasks without direct supervision. Welcome to check them out!

  • Oct. 2022: Gave a talk at MSRA.

  • Oct. 2022: I was awarded the China National Scholarship for Doctoral Students.

  • Sept. 2022: Just married💕!

  • Sept. 2022: We officially released a new version of the E-KAR dataset (v1.0 -> v1.1), with a substantially improved English dataset! Over 600 problems and 1,000 explanation texts were manually adjusted, and we were as strict as we could be! See more information at the E-KAR project page. Have fun!

  • July 2022: Gave a talk titled “Right for the Right Reasons: Explainable Reasoning on Analogical Recognition and Fact Verification” (in Chinese).

  • July 2022: ACT for NAT will be presented at NAACL-HLT 2022.

  • May 2022: E-KAR will be presented at the Commonsense Representation and Reasoning (CSRR) workshop at ACL 2022, discussions welcomed!

  • May 2022: E-KAR will be presented at ACL 2022 (virtually) in a poster session, welcome to check it out!

  • Apr. 2022: Our paper (ACT) on non-autoregressive translation got accepted at NAACL-HLT 2022!

  • Mar. 2022: The leaderboard of E-KAR has been released at EvalAI! Welcome to participate!

  • Mar. 2022: Our work LOREN received attention from the WikiResearch team 🧐; here’s the tweet.

  • Feb. 2022: Gave oral & poster presentations on LOREN and EDUCAT at the AAAI 2022 virtual conference.

  • Feb. 2022: Our paper (E-KAR) on analogical reasoning got accepted at ACL 2022 (Findings)!


Research Intern
Allen Institute for AI
Jun 2023 – Present, Seattle, Washington, U.S.
Aristo Team, mentored by Dr. Kyle Richardson.
Research Intern
ByteDance AI Lab
Nov 2019 – May 2023, Shanghai, China
Knowledge-guided text generation and natural language reasoning, working with Prof. Lei Li, Prof. Hao Zhou, and Dr. Changzhi Sun.
Student Researcher
Knowledge Works Lab (KW, 知识工场) at Fudan University
Apr 2017 – Present, Shanghai, China
Knowledge graph, text generation and reasoning, advised by Prof. Yanghua Xiao.


China National Scholarship for Doctoral Students
Honor Student Award in Computer Science of Top Talent Undergraduate Training Program

Recent Publications

(2023). Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction. Preprint.


(2023). AnalogyKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base. Preprint.

PDF Code

(2023). Distilling Script Knowledge from Large Language Models for Constrained Language Planning. In The 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023).

PDF Code

(2023). Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge. In The 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023).

PDF Code

(2022). Converge to the Truth: Factual Error Correction via Iterative Constrained Editing. In The 37th AAAI Conference on Artificial Intelligence (AAAI 2023).

PDF Cite Code Poster Slides