The ability of autonomous vehicles to maintain an accurate trajectory within their road lane is crucial for safe operation. This requires detecting the road lines and estimating the vehicle's pose relative to its lane. Lateral lines are usually retrieved from camera images, yet most works on line detection are limited to image-mask retrieval and do not provide a usable representation in world coordinates. In this paper we propose a complete perception pipeline, based on monocular vision, that retrieves all the information required by a vehicle lateral control system: the road line equations, the centerline, the vehicle heading, and the lateral displacement. We evaluate our system on data acquired with accurate geometric ground truth. To act as a benchmark for further research, we make this new dataset publicly available at http://airlab.deib.polimi.it/datasets/.
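
As an illustration of how such outputs feed lateral control, here is a minimal sketch deriving the centerline, lateral displacement, and heading error from two fitted road lines. It assumes the lines are expressed as polynomials y = f(x) in the vehicle frame (x forward, y left); this representation and the function name are chosen for illustration, not taken from the paper.

```python
import numpy as np

def lateral_state(left_coeffs, right_coeffs):
    """Centerline, lateral displacement and heading error from two road
    lines fitted as y = f(x) polynomials in the vehicle frame
    (x forward, y left); coefficients are highest-degree first."""
    center = 0.5 * (np.asarray(left_coeffs, float) + np.asarray(right_coeffs, float))
    lateral_offset = np.polyval(center, 0.0)      # signed offset at the vehicle
    slope = np.polyval(np.polyder(center), 0.0)   # centerline slope at x = 0
    heading_error = np.arctan(slope)              # heading relative to centerline
    return center, lateral_offset, heading_error

# e.g. two second-order fits, roughly 3.6 m apart
center, offset, heading = lateral_state([1e-3, 0.01, 1.8], [1e-3, 0.02, -1.8])
```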

Related content

The INFORMS Journal on Computing publishes high-quality papers that expand the scope of operations research and computing. It seeks original research papers on theory, methods, experiments, systems, and applications, as well as novel survey and tutorial papers, and papers describing new and useful software tools. Official website: https://pubsonline.informs.org/journal/ijoc

Transporting objects using aerial robots has been widely studied in the literature. Still, those approaches usually assume that the connection between the quadrotor and the load has been made in a previous stage; that stage typically requires human intervention, and autonomous procedures to locate and attach the object are not considered. Additionally, most approaches treat cables as rigid links, but manipulating cables requires considering their state while hanging. In this work, we design and control a catenary robot, able to transport hook-shaped objects in the environment. The robotic system is composed of two quadrotors attached to the two ends of a cable. By defining the catenary curve through five degrees of freedom (3D position, orientation about the z-axis, and span), we can drive the two quadrotors to track a given trajectory. We validate our approach with simulations and real robots across four different experimental scenarios. Our numerical solution is computationally fast and can be executed in real time.
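
To make the five-degree-of-freedom parameterization concrete, the sketch below places the two quadrotors at the ends of a catenary given the lowest point's 3D position, the yaw, the span, and the cable length. Taking the lowest point as the positional reference, and the function name itself, are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import brentq

def catenary_endpoints(p_low, yaw, span, cable_len):
    """Quadrotor positions at the two cable ends for a catenary whose lowest
    point is p_low, with horizontal span `span` and cable length
    `cable_len` (> span). Solves 2*a*sinh(span/(2a)) = cable_len for a."""
    f = lambda a: 2.0 * a * np.sinh(span / (2.0 * a)) - cable_len
    a = brentq(f, span / 1000.0, 1e6)             # catenary parameter
    sag = a * (np.cosh(span / (2.0 * a)) - 1.0)   # ends sit this high above p_low
    half = 0.5 * span * np.array([np.cos(yaw), np.sin(yaw), 0.0])
    lift = np.array([0.0, 0.0, sag])
    return p_low + half + lift, p_low - half + lift

left, right = catenary_endpoints(np.array([0.0, 0.0, 1.0]), 0.0, 1.0, 1.2)
```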

We demonstrate a successful navigation and docking control system for the John Deere Tango autonomous mower, using only a single camera as input. This vision-only system is of interest because it is inexpensive, simple to produce, and requires no external sensing, in contrast to existing systems that rely on integrated position sensors and global positioning system (GPS) technologies. To produce our system we combined a state-of-the-art object detection architecture, You Only Look Once (YOLO), with a reinforcement learning (RL) architecture, Double Deep Q-Networks (Double DQN). The object detection network identifies features on the mower and passes its output to the RL network, providing a low-dimensional representation that enables rapid and robust training. Finally, the RL network learns how to navigate the machine to the desired spot in a custom simulation environment. When tested on mower hardware, the system is able to dock with centimeter-level accuracy from arbitrary initial locations and orientations.
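
For reference, Double DQN decouples action selection from evaluation when building training targets. Below is a minimal numpy sketch of that target rule; `q_online` and `q_target` are placeholder callables returning per-action values, and all names are illustrative rather than from the paper's code.

```python
import numpy as np

def double_dqn_targets(rewards, next_states, dones, q_online, q_target, gamma=0.99):
    """Double DQN targets: the online network selects the next action, the
    target network evaluates it, which reduces Q-value overestimation."""
    best_actions = q_online(next_states).argmax(axis=1)     # selection
    q_next = q_target(next_states)                          # evaluation
    chosen = q_next[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * chosen
```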

To explore robotic grasping in unstructured and dynamic environments, this work addresses the visual perception phase involved in the task: processing visual data to obtain the location of the object to be grasped, its pose, and the points at which the robot's gripper must make contact to ensure a stable grasp. For this, the Cornell Grasping dataset is used to train a convolutional neural network that, given an image of the robot's workspace containing an object, predicts a grasp rectangle symbolizing the position, orientation, and opening of the robot's gripper before closing. In addition to this network, which runs in real time, a second one is designed to deal with situations in which the object moves in the environment. This second network is trained to perform visual servo control, ensuring that the object remains in the robot's field of view: it predicts the proportional values of the linear and angular velocities that the camera must have so that the object stays in the image processed by the grasp network. The dataset used for training was automatically generated by a Kinova Gen3 manipulator, which is also used to evaluate real-time applicability and obtain practical results from the designed algorithms. The offline results obtained on validation sets are also analyzed and discussed with respect to efficiency and processing speed. The developed controller achieves millimeter accuracy in the final position for a target object seen for the first time; to the best of our knowledge, no other work in the literature achieves such precision with a controller learned from scratch. This work thus presents a new system for autonomous robotic manipulation with high processing speed and the ability to generalize to several different objects.
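
To illustrate how a predicted grasp rectangle can map to a gripper command, the sketch below back-projects the rectangle's centre to a camera-frame point and converts its pixel width to a metric opening. It assumes known camera intrinsics and the depth at the grasp centre, neither of which is specified in the abstract; the helper name is hypothetical.

```python
import numpy as np

def grasp_rectangle_to_pose(cx, cy, theta, width_px, K, depth):
    """Map a grasp rectangle (image centre, angle, opening in pixels) to a
    camera-frame grasp point, gripper angle and metric opening, assuming
    the depth at the grasp centre is known; K is the 3x3 intrinsic matrix."""
    point = depth * (np.linalg.inv(K) @ np.array([cx, cy, 1.0]))  # back-projection
    opening = width_px * depth / K[0, 0]   # pixel width -> metres at that depth
    return point, theta, opening
```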

Flexible operation of multiple robotic manipulators in a shared workspace requires online trajectory planning with static and dynamic collision avoidance. In this work, we propose a real-time capable motion control algorithm, based on non-linear model predictive control, that accounts for both. The proposed algorithm is formulated as a non-cooperative game in which each robot is considered an agent; each agent optimizes its own motion while accounting for the predicted movement of the surrounding agents. We propose a novel formulation of the dynamic collision constraints and additionally handle deadlocks that can occur in a setup of multiple robotic manipulators. We validate our algorithm on a pick-and-place scenario with four collaborative robots operating in a common workspace in the Gazebo simulation environment, controlled via the Robot Operating System (ROS). We demonstrate that our approach is real-time capable and, due to its distributed nature, easily scales to an arbitrary number of robot manipulators in a shared workspace.
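
The sketch below illustrates the structure of one agent's receding-horizon step in such a non-cooperative game, using 2D single-integrator agents and scipy's generic solver in place of a real-time NMPC code; the dynamics, tuning values, and all names are illustrative, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

H, DT, D_SAFE = 8, 0.1, 0.4   # horizon, time step, minimum separation

def plan_step(x0, goal, others_pred):
    """One agent's step: optimize its own inputs while treating the other
    agents' communicated predictions as moving obstacles."""
    def cost(u_flat):
        u, x, c = u_flat.reshape(H, 2), x0.copy(), 0.0
        for k in range(H):
            x = x + DT * u[k]                        # roll out the dynamics
            c += np.sum((x - goal) ** 2) + 1e-2 * np.sum(u[k] ** 2)
        return c

    def separation(u_flat):
        u, x, margins = u_flat.reshape(H, 2), x0.copy(), []
        for k in range(H):
            x = x + DT * u[k]
            for p in others_pred:                    # other agents' predictions
                margins.append(np.linalg.norm(x - p[k]) - D_SAFE)
        return np.array(margins)                     # must stay >= 0

    res = minimize(cost, np.zeros(2 * H),
                   constraints={"type": "ineq", "fun": separation})
    return res.x.reshape(H, 2)[0]                    # apply only the first input

# one agent avoiding another predicted to move along the x-axis
other = [np.array([0.5 + 0.05 * k, 0.0]) for k in range(H)]
u0 = plan_step(np.array([0.0, 0.0]), np.array([1.0, 0.0]), [other])
```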

This paper presents an optimization-based collision-avoidance trajectory generation method for autonomous driving in free-space environments, with enhanced robustness, driving comfort, and efficiency. Starting from a hybrid optimization-based framework, we introduce two warm-start methods, temporal and dual-variable warm starts, to improve efficiency, and we reformulate the problem to improve robustness and efficiency further. We name this new algorithm TDR-OBCA. With these changes, compared with the original hybrid optimization, we achieve a 96.67% decrease in failure rate with respect to initial conditions, a 13.53% increase in driving comfort, and a 3.33% to 44.82% increase in planner efficiency as the number of obstacles scales. We validate our results in hundreds of simulation scenarios and hundreds of hours of public road tests in both the U.S. and China. Our source code is available at https://github.com/ApolloAuto/apollo.
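
As a flavour of what a temporal warm start does, the sketch below seeds each planning cycle with the previous cycle's solution shifted one step in time. This is a generic illustration only; TDR-OBCA's dual-variable warm start, which reuses the multipliers of the collision constraints, is not shown.

```python
import numpy as np

def temporal_warm_start(prev_solution):
    """Seed the new optimization with the previous cycle's trajectory
    shifted one step forward in time, repeating the final state."""
    shifted = np.roll(prev_solution, -1, axis=0)
    shifted[-1] = prev_solution[-1]   # hold the last state
    return shifted
```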

Since the DARPA Grand Challenges (rural) in 2004/05 and the Urban Challenge in 2007, autonomous driving has been one of the most active fields of AI application. Almost at the same time, deep learning achieved its breakthrough, driven by several pioneers; three of them (also called the fathers of deep learning), Hinton, Bengio, and LeCun, won the ACM Turing Award in 2019. This is a survey of autonomous driving technologies based on deep learning methods. We investigate the major fields of self-driving systems, such as perception, mapping and localization, prediction, planning and control, simulation, V2X, and safety. Due to limited space, we focus the analysis on several key areas: 2D and 3D object detection in perception; depth estimation from cameras; multi-sensor fusion at the data, feature, and task levels respectively; and behavior modelling and prediction of vehicle and pedestrian trajectories.

We propose a 3D object detection method for autonomous driving that fully exploits the sparse and dense, semantic and geometric information in stereo imagery. Our method, called Stereo R-CNN, extends Faster R-CNN to stereo inputs so as to simultaneously detect and associate objects in the left and right images. We add extra branches after the stereo Region Proposal Network (RPN) to predict sparse keypoints, viewpoints, and object dimensions, which are combined with the 2D left-right boxes to calculate a coarse 3D object bounding box. We then recover the accurate 3D bounding box by region-based photometric alignment using the left and right RoIs. Our method requires neither depth input nor 3D position supervision, yet it outperforms all existing fully supervised image-based methods. Experiments on the challenging KITTI dataset show that our method outperforms the state-of-the-art stereo-based method by around 30% AP on both the 3D detection and 3D localization tasks. Code will be made publicly available.
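
The region-based photometric alignment can be pictured as a one-dimensional search over depth: each candidate depth implies a disparity, and the depth whose shifted right RoI best matches the left RoI wins. A minimal grayscale sketch, with array shapes and the SSD error chosen for illustration rather than taken from the paper:

```python
import numpy as np

def align_depth(left_roi, right_strip, fx, baseline, depths):
    """Sweep candidate depths; each implies a disparity shift of the right
    image strip. Keep the depth whose shifted window best matches the left
    RoI photometrically. right_strip must be wider than left_roi."""
    h, w = left_roi.shape
    best_depth, best_err = None, np.inf
    for depth in depths:
        disp = int(round(fx * baseline / depth))   # depth -> disparity (pixels)
        if disp < 0 or disp + w > right_strip.shape[1]:
            continue
        err = np.mean((left_roi - right_strip[:, disp:disp + w]) ** 2)
        if err < best_err:
            best_depth, best_err = depth, err
    return best_depth
```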

3D vehicle detection and tracking from a monocular camera requires detecting and associating vehicles while jointly estimating their locations and extents. It is challenging because vehicles are in constant motion and recovering their 3D positions from a single image is practically impossible. In this paper, we propose a novel framework that jointly detects and tracks 3D vehicle bounding boxes. Our approach leverages 3D pose estimation to learn 2D patch association over time, and uses temporal information from tracking to obtain stable 3D estimates. It also leverages 3D box depth ordering and motion to link the tracks of occluded objects. We train our system in realistic 3D virtual environments, collecting a new diverse, large-scale, and densely annotated dataset with accurate 3D trajectory annotations. Our experiments demonstrate that the method benefits from inferring 3D both for data association and for tracking robustness, leveraging our dynamic 3D tracking dataset.
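
A minimal sketch of the 3D data-association step: match predicted track centres to detected box centres with the Hungarian algorithm, gated by distance. The paper's association additionally uses learned 2D patch similarity and depth ordering, which this sketch omits; the gate value is illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_3d(track_centers, det_centers, max_dist=3.0):
    """Match (T, 3) track predictions to (D, 3) detections by 3D centre
    distance, discarding pairs farther apart than max_dist metres."""
    cost = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```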

We propose a scalable, efficient and accurate approach to retrieve 3D models for objects in the wild. Our contribution is twofold. We first present a 3D pose estimation approach for object categories which significantly outperforms the state-of-the-art on Pascal3D+. Second, we use the estimated pose as a prior to retrieve 3D models which accurately represent the geometry of objects in RGB images. For this purpose, we render depth images from 3D models under our predicted pose and match learned image descriptors of RGB images against those of rendered depth images using a CNN-based multi-view metric learning approach. In this way, we are the first to report quantitative results for 3D model retrieval on Pascal3D+, where our method chooses the same models as human annotators for 50% of the validation images on average. In addition, we show that our method, which was trained purely on Pascal3D+, retrieves rich and accurate 3D models from ShapeNet given RGB images of objects in the wild.
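
The retrieval step reduces to nearest-neighbour search in the learned metric space. Below is a minimal sketch comparing an RGB descriptor against depth-rendering descriptors (one per candidate model, rendered under the predicted pose) by cosine similarity; descriptor extraction itself is assumed to be done by the trained CNN, and the function name is hypothetical.

```python
import numpy as np

def retrieve_model(rgb_desc, depth_descs):
    """Pick the candidate 3D model whose rendered-depth descriptor is
    closest (cosine similarity) to the RGB image descriptor."""
    q = rgb_desc / np.linalg.norm(rgb_desc)
    D = depth_descs / np.linalg.norm(depth_descs, axis=1, keepdims=True)
    return int(np.argmax(D @ q))   # index of the best-matching model
```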

Online multi-object tracking (MOT) is extremely important for high-level spatial reasoning and path planning in autonomous and highly automated vehicles. In this paper, we present a modular framework for tracking multiple objects (vehicles) that can accept object proposals from different sensor modalities (vision and range) and a variable number of sensors to produce continuous object tracks. This work is inspired by traditional tracking-by-detection approaches in computer vision, with two key differences. First, we track objects across multiple cameras and across different sensor modalities, by fusing object proposals across sensors accurately and efficiently. Second, the objects of interest (targets) are tracked directly in the real world, a departure from traditional techniques that track objects in the image plane; doing so allows the tracks to be readily used by an autonomous agent for navigation and related tasks. To verify the effectiveness of our approach, we test it on real-world highway data collected from a heavily sensorized testbed capable of capturing full-surround information. We demonstrate that our framework is well suited to tracking objects through entire maneuvers around the ego-vehicle, some of which take more than a few minutes to complete. We also leverage the modularity of our approach by comparing the effects of including or excluding different sensors, changing the total number of sensors, and the quality of object proposals on the final tracking result.
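
Tracking directly in the real world hinges on lifting image detections into world coordinates. A common minimal approach, sketched below, intersects the camera ray through a detection's footpoint with the ground plane, given intrinsics K and world-to-camera extrinsics (R, t); whether the paper uses exactly this flat-ground model is an assumption.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Lift an image point to world coordinates by intersecting its camera
    ray with the ground plane z = 0. (R, t) map world to camera frame."""
    ray_world = R.T @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    cam_origin = -R.T @ t                  # camera centre in the world frame
    s = -cam_origin[2] / ray_world[2]      # scale to reach the ground plane
    return cam_origin + s * ray_world
```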
