Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction

Figure: An example of establishing an analogy between two systems across different domains.

Abstract

Analogical reasoning is essential for human cognition, allowing us to comprehend new concepts by relating them to familiar ones through common relational structures. Previous work mainly focuses on word analogies, which do not fully capture the analogical reasoning ability of language models (LMs). This paper first examines analogy prompting for large language models (LLMs) on scientific question-answering tasks. We then discover that LLMs tend to ignore relational structures when performing word analogies, casting doubt on the utility of word analogies for evaluating analogical reasoning. For better evaluation, we propose an analogical structure abduction task grounded in cognitive psychology, which aims to abduct the structures between two systems to establish an analogy. We then create SCAR, a benchmark of scientific analogical reasoning with structure abduction, consisting of 400 scientific analogies across 13 domains. Empirical results reveal that LLMs struggle with this task, but a Chain-of-Thought (CoT) method with background knowledge and explanations can improve their capability.
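
To make the structure abduction task concrete, here is a minimal Python sketch of what an instance and its evaluation might look like. The instance format, the solar-system/atom concept names, and the `mapping_accuracy` metric are illustrative assumptions for exposition, not the actual SCAR schema or its official evaluation protocol.

```python
# A minimal sketch of the analogical structure abduction task described
# in the abstract, under an assumed instance format: given two systems
# from different domains, abduct a one-to-one mapping between their
# concepts that preserves relational structure.

# Hypothetical instance: the classic solar-system / atom analogy.
source_system = ["sun", "planet", "gravity", "orbit"]
target_system = ["nucleus", "electron", "electromagnetic force", "orbit"]

# Gold concept mapping (source concept -> target concept).
gold_mapping = {
    "sun": "nucleus",
    "planet": "electron",
    "gravity": "electromagnetic force",
    "orbit": "orbit",
}

def mapping_accuracy(predicted: dict[str, str], gold: dict[str, str]) -> float:
    """Fraction of gold concept pairs that the model mapped correctly."""
    correct = sum(1 for src, tgt in gold.items() if predicted.get(src) == tgt)
    return correct / len(gold)

# A model relying on surface similarity alone might get the shared word
# "orbit" right while swapping the relationally analogous pair.
predicted_mapping = {
    "sun": "electron",
    "planet": "nucleus",
    "gravity": "electromagnetic force",
    "orbit": "orbit",
}
print(f"mapping accuracy: {mapping_accuracy(predicted_mapping, gold_mapping):.2f}")  # 0.50
```

The point of the sketch is that the task rewards recovering relational correspondences (the sun plays the role of the nucleus because planets orbit it under an attractive force), which a word-analogy test based on surface similarity does not require.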

Type: Publication
In Findings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)
Jiangjie Chen
Ph.D. Candidate

His research interests mainly include natural language reasoning and large language models.