Theme II - Towards Factual, Controllable and Versatile Text Generation 🤖
The success of deep text generation is held back by its lack of factuality and its difficulty to control. How can we unleash the power of text generation models while keeping them from talking gibberish? In this research theme, we explore the potential of text generation w.r.t. factuality and controllability.
To achieve these goals, a key approach is to incorporate various forms of prior knowledge into neural models, including logical rules, templates, and external knowledge bases. Representative papers in this theme include:
- ACT (NAACL 2022): improving lexically constrained non-autoregressive machine translation, especially for low-frequency constraints;
- LOREN (AAAI 2022): interpretable fact verification against trustworthy knowledge bases;
- FalCon (DASFAA 2022): faithful response generation with contrastive learning;
- KeDy (WSDM 2022) and Keep (NLPCC 2021): diversified generation guided by knowledge graphs;
- HedModTmpl (ACL 2019): generating faithful entity type descriptions constrained by head-modifier templates.