The paper establishes generalization bounds for multitask deep neural networks using operator-theoretic techniques. By leveraging small condition numbers of the weight matrices and introducing a tailored Sobolev space as an expanded hypothesis space, the authors derive a bound tighter than those obtained from conventional norm-based methods. The bound remains valid even in the single-output setting, where it improves on existing Koopman-based bounds. The resulting framework retains key advantages such as flexibility and independence from network width, offering a more precise theoretical understanding of multitask deep learning in the context of kernel methods.
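As a concrete illustration of the central quantity, and not of the paper's actual construction, the sketch below computes the condition number kappa(W) = sigma_max / sigma_min of a weight matrix and contrasts it with the spectral norm that conventional norm-based bounds depend on. The matrix names and sizes are hypothetical; the point is that two matrices of comparable spectral norm can have wildly different condition numbers, which is where a condition-number-based bound can be much tighter.

```python
import numpy as np

def condition_number(W: np.ndarray) -> float:
    """Ratio of largest to smallest singular value: kappa(W) = sigma_max / sigma_min."""
    s = np.linalg.svd(W, compute_uv=False)  # singular values, descending order
    return s[0] / s[-1]

# Hypothetical weight matrices of a small network layer.
rng = np.random.default_rng(0)
well_conditioned = np.eye(64) + 0.1 * rng.standard_normal((64, 64))
ill_conditioned = rng.standard_normal((64, 64)) @ np.diag(np.logspace(0, -6, 64))

for name, W in [("well-conditioned", well_conditioned),
                ("ill-conditioned", ill_conditioned)]:
    # Spectral norm (largest singular value) drives norm-based bounds;
    # the condition number drives the bound discussed in the paper.
    print(f"{name}: spectral norm = {np.linalg.norm(W, 2):.2f}, "
          f"condition number = {condition_number(W):.2e}")
```

Under the paper's premise, layers whose kappa(W) stays small keep the operator-theoretic bound small regardless of layer width, whereas a norm-based bound would charge both matrices above roughly the same amount.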