For time series classification with 1D-CNNs, the choice of kernel size is critical to ensuring the model captures salient signals at the right scale from a long time series. Most existing work on 1D-CNNs treats the kernel size as a hyper-parameter and searches for a proper value via grid search, which is time-consuming and inefficient. This paper theoretically analyses how kernel size affects the performance of a 1D-CNN. Given the importance of kernel size, we propose a novel Omni-Scale 1D-CNN (OS-CNN) architecture that captures the proper kernel size during model training. We develop a specific kernel-size configuration that allows a very small set of kernel-size options to cover a much larger set of receptive fields. The proposed OS-CNN method is evaluated on the UCR archive with 85 datasets. The experimental results demonstrate that our method is a stronger baseline on multiple performance indicators, including the critical difference diagram, counts of wins, and average accuracy. The experimental source code is published on GitHub (https://github.com/Wensi-Tang/OS-CNN/).
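To give intuition for how a few kernel-size options can represent many receptive fields, the sketch below counts the distinct receptive fields reachable by stacking two stride-1 convolutional layers whose kernel sizes are drawn from a small candidate set. The candidate set `{1, 2, 3, 5, 7}` and the two-layer depth are illustrative assumptions, not the paper's actual OS-CNN configuration; it only uses the standard fact that stacked stride-1 convolutions with kernel sizes k1 and k2 have a combined receptive field of k1 + k2 - 1.

```python
from itertools import product

def receptive_field(kernel_sizes):
    """Receptive field of stacked stride-1, dilation-1 1D convolutions:
    RF = sum(k_i) - (number_of_layers - 1)."""
    return sum(kernel_sizes) - (len(kernel_sizes) - 1)

# Hypothetical small set of kernel-size options (not the paper's exact set).
options = [1, 2, 3, 5, 7]

# Every receptive field reachable by a two-layer stack over these options.
covered = sorted({receptive_field(c) for c in product(options, repeat=2)})
print(covered)  # 11 distinct receptive fields from only 5 kernel-size options
```

Even this toy configuration shows the combinatorial leverage: 5 options yield 11 distinct receptive fields over two layers, so a model that keeps all branches and learns their weights can effectively select scale without a grid search over kernel sizes.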