Evaluations of the creativity of large language models (LLMs) have focused primarily on the quality of their outputs rather than on the processes that shape them. This study takes a process-oriented approach, drawing on narratology to examine LLMs as computational authors. We introduce constraint-based decision-making as a lens for authorial creativity. Using controlled prompting to assign authorial personas, we analyze the models' creative preferences. Our findings show that LLMs consistently prioritize Style over other narrative elements, including Character, Event, and Setting. By also probing the reasoning the models provide for their choices, we show that distinctive profiles emerge across models, and we argue that our approach offers a novel, systematic tool for analyzing AI's authorial creativity.