For each given number of parts, a data set can be partitioned such that every part is as much as possible a good model (an "algorithmic sufficient statistic") for the data in that part. Since this can be done for every number between one and the number of data items, the result is a function, the cluster structure function. It maps the number of parts of a partition to values related to the deficiencies of the parts as good models. Such a function starts at a value of at least zero when the data set is not partitioned and descends to zero for the partition of the data set into singleton parts. The optimal clustering is chosen by minimizing the cluster structure function. The theory behind the method is expressed in algorithmic information theory (Kolmogorov complexity). In practice, the Kolmogorov complexities involved are approximated by a concrete compressor. We give examples using real data sets: the MNIST handwritten digits and the segmentation of real cells as used in stem cell research.
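The sketch below is only an illustration of the computational skeleton described above, not the paper's method: it scores partitions of a small data set for each possible number of parts, using the zlib compressed length as a stand-in for Kolmogorov complexity and a greedy agglomerative merge instead of an exact minimization over all partitions. The score reported per number of parts is the total compressed length of the parts, which is a simplification of the model-deficiency values the cluster structure function is defined with; all function names and the toy data are illustrative assumptions.

```python
# Toy sketch (assumptions noted above): a compressor-based scoring of
# partitions of a data set, one score per possible number of parts.
import itertools
import zlib


def C(data: bytes) -> int:
    """Compressed length, used here as a stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))


def part_cost(part: list[bytes]) -> int:
    """Cost of one part: compressed length of its concatenated members."""
    return C(b"".join(part))


def partition_scores(xs: list[bytes]) -> dict[int, int]:
    """Map each number of parts k to a compression score, found greedily.

    Starts from singleton parts (k = len(xs)) and repeatedly merges the pair
    of parts whose union compresses best, recording a score for each k.
    """
    parts = [[x] for x in xs]
    scores = {len(parts): sum(part_cost(p) for p in parts)}
    while len(parts) > 1:
        i, j = min(itertools.combinations(range(len(parts)), 2),
                   key=lambda ij: part_cost(parts[ij[0]] + parts[ij[1]]))
        merged = parts[i] + parts[j]
        parts = [p for t, p in enumerate(parts) if t not in (i, j)] + [merged]
        scores[len(parts)] = sum(part_cost(p) for p in parts)
    return scores


if __name__ == "__main__":
    # Illustrative toy data: two "clusters" of similar byte strings.
    data = [b"aaaaaabbb", b"aaabbbbbb", b"cccddddcc", b"ddccccddd"]
    for k, score in sorted(partition_scores(data).items()):
        print(f"k={k}: total compressed length {score}")
```

In this simplified setting one would inspect how the score changes as the number of parts decreases; the paper instead analyzes the cluster structure function itself, whose values are deficiencies of the parts as models rather than raw compressed lengths.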