Jiangjie Chen
Reasoning
Past Meets Present: Creating Historical Analogy with Large Language Models
We focus on acquiring historical analogies using LLMs, proposing a self-reflection method to reduce hallucinations and stereotypes, and showing that LLMs have strong potential for this task.
Nianqi Li, Siyu Yuan, Jiangjie Chen, Jiaqing Liang, Feng Wei, Zujie Liang, Deqing Yang, Yanghua Xiao
PDF · Cite · Code
DetectBench: Can Large Language Model Detect and Piece Together Implicit Evidence?
We introduce DetectBench, a benchmark for testing LLMs’ evidence detection in long contexts, and demonstrate that while existing LLMs lag behind human performance, the proposed Detective Reasoning Prompt and Finetuning methods can significantly improve their evidence detection and reasoning capabilities.
Zhouhong Gu, Lin Zhang, Xiaoxuan Zhu, Jiangjie Chen, Wenhao Huang, Yikai Zhang, Shusen Wang, Zheyu Ye, Yan Gao, Hongwei Feng, Yanghua Xiao
PDF · Cite · Code
EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms
We introduce EvoAgent, a method using evolutionary algorithms to automatically expand expert agents into multi-agent systems, enhancing the task-solving capabilities of large language model-based agents without additional human design.
Siyu Yuan, Kaitao Song, Jiangjie Chen, Xu Tan, Dongsheng Li, Deqing Yang
PDF · Cite · Code
Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena
We propose AucArena to test LLMs in auctions, showing that they can strategize but with variable success, indicating potential for enhancement.
Jiangjie Chen, Siyu Yuan, Rong Ye, Bodhisattwa Prasad Majumder, Kyle Richardson
PDF · Cite · Demo
Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction
We propose a scientific analogical reasoning benchmark with structure abduction, SCAR, and show that large language models make reasonable scientific analogies after structure abduction.
Siyu Yuan, Jiangjie Chen, Xuyang Ge, Yanghua Xiao, Deqing Yang
PDF · Cite · Code
AnalogyKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base
A million-scale analogy knowledge base derived from existing KGs, enabling large language models to acquire analogical reasoning skills.
Siyu Yuan, Jiangjie Chen, Changzhi Sun, Jiaqing Liang, Yanghua Xiao, Deqing Yang
PDF · Code
Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge
We find that large language models (LLMs) speak too positively about negative commonsense knowledge, which is caused by statistical shortcuts and negation reporting bias from language modeling pre-training.
Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei Li, Yanghua Xiao
PDF · Cite · Poster · Slides · Code
Harnessing Knowledge and Reasoning for Human-Like Natural Language Generation: A Brief Review
We briefly review the recent progress of knowledge-guided NLG, set ten goals for future development, and envision challenges in attaining these objectives.
Jiangjie Chen, Yanghua Xiao
PDF
Unsupervised Explanation Generation via Correct Instantiations
We generate explanations of why a statement is wrong by prompting LLMs with its corrected version.
Sijie Cheng, Zhiyong Wu, Jiangjie Chen, Zhixing Li, Yang Liu, Lingpeng Kong
PDF · Cite · Code · Poster · Slides · DOI
Theme I - Text Reasoning, Being Right for the Right Reasons 🤔
Humans make intuitive inferences all the time, yet they need reasons to convince others and to justify their inferences or decisions. How can machines better convince humans of their predictions? The key may lie in being right for the right and faithful reasons.
Feb 27, 2022
Cite