At present, the performance of deep neural networks in general object detection is comparable to, or even surpasses, that of humans. However, due to the limitations of deep learning itself, the small proportion of feature pixels, and the occurrence of blur and occlusion, detecting small objects in complex scenes remains an open problem. Yet real-time and accurate object detection is fundamental to automatic perception and to the subsequent perception-based decision-making and planning tasks of autonomous driving. Considering the characteristics of small objects in autonomous driving scenes, we propose a novel method named KB-RANN, which is based on domain knowledge, intuitive experience, and attentive feature selection. It focuses on particular parts of image features, stresses the importance of those features, and strengthens their learning parameters. Comparative experiments on the KITTI and COCO datasets show that the proposed method achieves considerable results in both speed and accuracy, and that it improves small object detection through self-selection of important features and continuous enhancement of their weighting. We have also deployed the method in our self-developed autonomous driving car.
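The abstract does not spell out KB-RANN's architecture, so as a loose illustration of attentive feature selection of the kind described (re-weighting feature channels so that informative ones are strengthened), a minimal channel-attention sketch in PyTorch might look like the following; the module name, channel count, and reduction factor are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic channel-attention sketch: learns per-channel weights that
    emphasize informative feature maps (illustrative, not the KB-RANN design)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze spatial dimensions
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel importance in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # strengthen the selected features

feats = torch.randn(2, 256, 38, 38)                  # dummy backbone feature map
attended = ChannelAttention(256)(feats)
```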
Feature selection has evolved into an important step in several machine learning paradigms. In domains such as bioinformatics and text classification, which involve high-dimensional data, feature selection can drastically reduce the feature space. In cases where it is difficult or infeasible to obtain a sufficient number of training examples, feature selection helps overcome the curse of dimensionality, which in turn improves the performance of the classification algorithm. The focus of our research is five embedded feature selection methods that use ridge regression, Lasso regression, or a combination of the two in the regularization term of the optimization function. We evaluate the five chosen methods on five high-dimensional datasets and compare them in terms of the sparsity and correlation present in the datasets and their execution times.
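As a hedged illustration of how such embedded methods operate (the five specific methods are not named here), the sketch below fits L1-, L2-, and mixed-penalty linear models with scikit-learn and reads the selected features off the coefficients; the synthetic dataset and the alpha and l1_ratio values are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge, ElasticNet

# Synthetic high-dimensional data stands in for the real benchmark datasets.
X, y = make_regression(n_samples=200, n_features=5000, n_informative=20, random_state=0)

# Lasso (pure L1): drives most coefficients exactly to zero -> sparse selection.
lasso = Lasso(alpha=0.1).fit(X, y)
selected_lasso = np.flatnonzero(lasso.coef_)

# Elastic net (L1 + L2): mixes both penalties, useful when features are correlated.
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
selected_enet = np.flatnonzero(enet.coef_)

# Ridge (pure L2) does not zero coefficients; rank features by magnitude instead.
ridge = Ridge(alpha=1.0).fit(X, y)
top_ridge = np.argsort(np.abs(ridge.coef_))[::-1][:20]

print(len(selected_lasso), len(selected_enet), top_ridge[:5])
```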
Many problems in machine learning and data mining are equivalent to selecting a non-redundant, high-quality set of objects. Recommender systems, feature selection, and data summarization are among the many applications. In this paper, we cast this problem as an optimization problem that seeks to maximize the sum of a sum-sum diversity function and a non-negative monotone submodular function. The diversity function addresses redundancy, and the submodular function controls predictive quality. We consider the problem in big data settings (that is, distributed and streaming settings) where the data cannot be stored on a single machine or the processing time is too high for a single machine. We show that a greedy algorithm achieves a constant-factor approximation of the optimal solution in these settings. Moreover, we formulate the multi-label feature selection problem as such an optimization problem. This formulation, combined with our algorithm, leads to the first distributed multi-label feature selection method. We compare the performance of this method with centralized multi-label feature selection methods from the literature and show that its performance is comparable to, and in some cases better than, that of current centralized methods.
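To make the objective concrete, the sketch below greedily picks k items so as to maximize a sum-sum (pairwise-distance) diversity term plus a simple monotone submodular label-coverage term. The particular distance, coverage function, trade-off weight, and data are illustrative assumptions for a single machine, not the paper's distributed or streaming algorithm.

```python
import numpy as np

def greedy_diverse_submodular(points, labels_covered, k, lam=1.0):
    """Greedily maximize: sum-sum diversity + lam * label coverage (monotone submodular).

    points:          (n, d) feature vectors
    labels_covered:  list of sets; labels_covered[i] = labels that item i covers
    """
    n = len(points)
    selected, covered = [], set()
    for _ in range(k):
        best_i, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # Diversity gain: sum of distances from candidate i to already-selected items.
            div_gain = sum(np.linalg.norm(points[i] - points[j]) for j in selected)
            # Submodular gain: number of labels newly covered by candidate i.
            cov_gain = len(labels_covered[i] - covered)
            gain = div_gain + lam * cov_gain
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        covered |= labels_covered[best_i]
    return selected

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 8))
cov = [set(rng.choice(10, size=3, replace=False).tolist()) for _ in range(50)]
print(greedy_diverse_submodular(pts, cov, k=5))
```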