Chain-of-Thought (CoT) is widely applied to improve LLM capabilities in math, coding, and reasoning tasks. However, its performance is limited on open-domain tasks, which lack clearly defined reasoning steps or logical transitions. To mitigate these challenges, we propose another prompt-based paradigm called Chain of Conceptual Thought (CoCT), in which the LLM first tags a concept and then generates the detailed content. A chain of concepts is allowed within a single utterance, encouraging the LLM's deep and strategic thinking. We experiment with this paradigm in daily and emotional support conversations, where concepts comprise emotions, strategies, and topics. Automatic, human, and model evaluations show that CoCT surpasses baselines such as Self-Refine, ECoT, ToT, SoT, and RAG, suggesting a potentially effective prompt-based paradigm of LLM for a wider scope of tasks.
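To make the tag-then-generate pattern concrete, below is a minimal sketch of a CoCT-style prompt, assuming an OpenAI-compatible chat API; the tag format, concept labels, and model name are illustrative assumptions, not the paper's exact prompt or implementation.

```python
# A minimal CoCT-style prompting sketch (assumptions: OpenAI-compatible API,
# illustrative tag format and concept labels, placeholder model name).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COCT_SYSTEM_PROMPT = (
    "Before each segment of your reply, first output a concept tag of the "
    "form [Emotion: ...] [Strategy: ...] [Topic: ...], then write the "
    "detailed content realizing that concept. You may chain several tagged "
    "segments within a single utterance."
)

def coct_reply(user_utterance: str) -> str:
    """Generate a reply in which each segment is preceded by its concept tag."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            {"role": "system", "content": COCT_SYSTEM_PROMPT},
            {"role": "user", "content": user_utterance},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(coct_reply("I failed my exam and I feel terrible."))
```

The key design choice sketched here is that concept tagging and content generation happen in one decoding pass, so earlier concept tags can condition later segments within the same utterance.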