We introduce a new fundamental algorithm, called Matrix-POAFD, for solving the matrix least squares problem. The method is based on the matching pursuit principle: it directly extracts, from the given features taken as column vectors of the measurement matrix, and in the order of their importance, the decisive features for the observation vector. With computational efficiency competitive with the existing sophisticated least squares solvers, the proposed method, owing to its explicit and iterative algorithmic process, has the advantage of trading off minimal norms against tolerable error scales. The method inherits recently developed studies in function-space contexts. The second main contribution, also algorithmic, is a two-step iterative computational method for the pseudo-inverse. We show that consecutively performing two least squares solutions, one with respect to $X$ and the other to $X^*,$ results in the minimum-norm least squares solution. The two steps can also be combined into one by solving a single least squares problem with respect to $XX^\ast.$ The result is extended to the functional formulation as well. To better explain the ideas, and for self-containedness, we give short surveys with proofs of key results on closely related subjects, including solutions in the reproducing kernel Hilbert space setting, AFD-type sparse representation in terms of matching pursuit, the general ${\mathcal H}$-$H_K$ formulation, and the pseudo-inverse of a bounded linear operator on Hilbert spaces.
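To illustrate the matching pursuit principle the abstract invokes, the following is a minimal sketch of plain (non-orthogonal) matching pursuit over the columns of a measurement matrix; it is an assumption-laden toy, not the paper's Matrix-POAFD algorithm. The function name `matching_pursuit` and the stopping rule (a fixed iteration count `n_iter`) are illustrative choices.

```python
import numpy as np

def matching_pursuit(A, y, n_iter=3):
    """Greedily pick, one at a time, the column of A most correlated
    with the current residual, and update the residual accordingly.
    A generic matching pursuit sketch, not the paper's algorithm."""
    r = np.asarray(y, dtype=float).copy()
    coef = np.zeros(A.shape[1])
    cols = A / np.linalg.norm(A, axis=0)       # unit-norm columns
    for _ in range(n_iter):
        corr = cols.T @ r                      # correlation with residual
        k = np.argmax(np.abs(corr))            # most "important" feature
        step = corr[k]
        coef[k] += step / np.linalg.norm(A[:, k])  # coefficient in original scale
        r -= step * cols[:, k]                 # shrink the residual
    return coef, r
```

On an orthonormal dictionary each greedy step removes one component of `y` exactly; for general matrices the residual only decreases, which is where the trade-off between sparsity and error scale enters.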
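The combined form of the two-step pseudo-inverse claim can be checked numerically: for the minimum-norm least squares solution one may first solve a least squares problem with respect to $XX^\ast$ and then pull the solution back through $X^\ast$. The sketch below, assuming a real underdetermined matrix so that $X^\ast = X^T$, compares this against NumPy's pseudo-inverse; it is a numerical illustration of the stated identity, not the paper's iterative algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))   # underdetermined: infinitely many LS solutions
y = rng.standard_normal(3)

# Step 1: least squares with respect to X X* (X @ X.T for real X).
w, *_ = np.linalg.lstsq(X @ X.T, y, rcond=None)
# Step 2: pull back through X*: candidate minimum-norm LS solution.
b = X.T @ w

# Agrees with the pseudo-inverse solution b = X^+ y.
print(np.allclose(b, np.linalg.pinv(X) @ y))
```

Because $X^+ = X^\ast (XX^\ast)^+$, the two steps above reproduce the minimum-norm solution whenever the first step is itself solved in the least squares sense.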