Jiangjie Chen
Large Language Models
Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction
We propose a scientific analogical reasoning benchmark with structure abduction, SCAR, and show that large language models make reasonable scientific analogies after structure abduction.
Siyu Yuan, Jiangjie Chen, Xuyang Ge, Yanghua Xiao, Deqing Yang
PDF · Cite · Code
AnalogyKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base
We build AnalogyKB, a million-scale analogy knowledge base derived from existing knowledge graphs, which enables language models to acquire analogical reasoning skills.
Siyu Yuan, Jiangjie Chen, Changzhi Sun, Jiaqing Liang, Yanghua Xiao, Deqing Yang
PDF · Code
Distilling Script Knowledge from Large Language Models for Constrained Language Planning
We propose an over-generate-then-filter approach to improve large language models (LLMs) on constrained language planning, and use it to distill a novel constrained language planning dataset, CoScript.
Siyu Yuan, Jiangjie Chen, Ziquan Fu, Xuyang Ge, Soham Shah, Charles Robert Jankowski, Yanghua Xiao, Deqing Yang
PDF · Cite · Poster · Slides · Code
Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge
We find that large language models (LLMs) speak too positively about negative commonsense knowledge, a failure we attribute to statistical shortcuts and a reporting bias against negation acquired during language-modeling pre-training.
Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei Li, Yanghua Xiao
PDF · Cite · Poster · Slides · Code