This article presents the creation of an Estonian-language dataset for document-level subjectivity, analyzes the resulting annotations, and reports an initial experiment on automatic subjectivity analysis using a large language model (LLM). The dataset comprises 1,000 documents (300 journalistic articles and 700 randomly selected web texts), each rated for subjectivity on a continuous scale from 0 (fully objective) to 100 (fully subjective) by four annotators. As the inter-annotator correlations were only moderate, with some texts receiving scores at opposite ends of the scale, a subset of texts with the most divergent scores was re-annotated, which improved the inter-annotator correlation. In addition to the human annotations, the dataset includes scores generated by GPT-5 as an experiment in annotation automation. These scores were similar to those of the human annotators; however, several differences emerged, suggesting that while LLM-based automatic subjectivity scoring is feasible, it is not an interchangeable alternative to human annotation, and its suitability depends on the intended application.
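As a minimal sketch of the kind of inter-annotator agreement computation the abstract refers to, the snippet below computes pairwise Spearman rank correlations over the four annotators' 0-100 scores. The file name and column names (`subjectivity_annotations.csv`, `annotator_1` ... `annotator_4`) are hypothetical placeholders, not the dataset's actual layout.

```python
# Sketch: pairwise inter-annotator correlation on 0-100 subjectivity scores.
# File and column names are hypothetical; adapt to the released dataset.
from itertools import combinations

import pandas as pd
from scipy.stats import spearmanr

scores = pd.read_csv("subjectivity_annotations.csv")  # one row per document
annotators = ["annotator_1", "annotator_2", "annotator_3", "annotator_4"]

# Spearman rank correlation for every pair of annotators.
for a, b in combinations(annotators, 2):
    rho, _ = spearmanr(scores[a], scores[b])
    print(f"{a} vs {b}: rho = {rho:.2f}")
```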