Jiangjie Chen
Large Language Models
Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena
We propose AucArena to test LLMs in auction settings, showing that they can strategize but with variable success, indicating potential for improvement.
Jiangjie Chen, Siyu Yuan, Rong Ye, Bodhisattwa Prasad Majumder, Kyle Richardson
PDF · Cite · Demo
Translate Meanings, Not Just Words: IdiomKB's Role in Optimizing Idiomatic Translation with Language Models
We propose a multilingual idiom knowledge base (IdiomKB), developed using LLMs, which helps smaller models produce better idiomatic translations by retrieving idioms' figurative meanings.
Shuang Li, Jiangjie Chen, Siyu Yuan, Xinyi Wu, Hao Yang, Shimin Tao, Yanghua Xiao
PDF · Cite · Code
Adaptive Chameleon or Stubborn Sloth: Unraveling the Behavior of Large Language Models in Knowledge Clashes
We present the first comprehensive and controlled investigation into the behavior of large language models when encountering knowledge conflicts.
Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, Yu Su
PDF · Code
Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction
We propose SCAR, a benchmark for scientific analogical reasoning with structure abduction, and show that large language models make reasonable scientific analogies after structure abduction.
Siyu Yuan, Jiangjie Chen, Xuyang Ge, Yanghua Xiao, Deqing Yang
PDF · Code
AnalogyKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base
A million-scale analogy knowledge base derived from existing KGs, enabling language models to acquire analogical reasoning skills.
Siyu Yuan, Jiangjie Chen, Changzhi Sun, Jiaqing Liang, Yanghua Xiao, Deqing Yang
PDF · Code
Distilling Script Knowledge from Large Language Models for Constrained Language Planning
We propose an over-generate-then-filter approach to improve large language models (LLMs) on constrained language planning, and use it to distill a novel constrained language planning dataset, CoScript.
Siyu Yuan, Jiangjie Chen, Ziquan Fu, Xuyang Ge, Soham Shah, Charles Robert Jankowski, Yanghua Xiao, Deqing Yang
PDF · Cite · Poster · Slides · Code
Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge
We find that large language models (LLMs) speak too positively about negative commonsense knowledge, a tendency caused by statistical shortcuts and negation reporting bias from language modeling pre-training.
Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei Li, Yanghua Xiao
PDF · Cite · Poster · Slides · Code