Jiangjie Chen
Planning
EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms
We introduce EvoAgent, a method using evolutionary algorithms to automatically expand expert agents into multi-agent systems, enhancing the task-solving capabilities of large language model-based agents without additional human design.
Siyu Yuan, Kaitao Song, Jiangjie Chen, Xu Tan, Dongsheng Li, Deqing Yang
PDF
Cite
Code
SelfGoal: Your Language Agents Already Know How to Achieve High-level Goals
We introduce SelfGoal, an automatic approach that enhances language agents’ capabilities to achieve high-level goals with limited instructions and delayed feedback by adaptively breaking down goals into practical subgoals.
Ruihan Yang, Jiangjie Chen, Yikai Zhang, Siyu Yuan, Aili Chen, Kyle Richardson, Yanghua Xiao, Deqing Yang
PDF
Cite
Code
TimeArena: Shaping Efficient Multitasking Language Agents in a Time-Aware Simulation
TimeArena is a time-aware simulation that grounds language agents in temporal dynamics for more efficient multitasking, showing that even advanced models like GPT-4 still trail behind humans in temporal awareness.
Yikai Zhang, Siyu Yuan, Caiyu Hu, Kyle Richardson, Yanghua Xiao, Jiangjie Chen
PDF
Cite
Project
TravelPlanner: A Benchmark for Real-World Planning with Language Agents
We introduce TravelPlanner, a benchmark for assessing language agents’ planning abilities in real-world scenarios, showing that even advanced models like GPT-4 struggle with complex planning tasks.
Jian Xie, Kai Zhang, Jiangjie Chen, Tinghui Zhu, Renze Lou, Yuandong Tian, Yanghua Xiao, Yu Su
PDF
Cite
Dataset
Code
Demo
Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena
We propose AucArena, an auction arena for evaluating LLM agents, showing that they can plan and execute strategies, albeit with variable success, indicating room for improvement.
Jiangjie Chen, Siyu Yuan, Rong Ye, Bodhisattwa Prasad Majumder, Kyle Richardson
PDF
Cite
Demo
Distilling Script Knowledge from Large Language Models for Constrained Language Planning
We propose an over-generate-then-filter approach to improve large language models (LLMs) on constrained language planning, and use it to distill CoScript, a novel constrained language planning dataset.
Siyu Yuan, Jiangjie Chen, Ziquan Fu, Xuyang Ge, Soham Shah, Charles Robert Jankowski, Yanghua Xiao, Deqing Yang
PDF
Cite
Poster
Slides
Code