Near-field perception is essential for the safe operation of autonomous mobile robots (AMRs) in manufacturing environments. Conventional ranging sensors such as light detection and ranging (LiDAR) and ultrasonic devices provide broad situational awareness but often fail to detect small objects near the robot base. To address this limitation, this paper presents a three-tier near-field perception framework. The first approach employs light-discontinuity detection, which projects a laser stripe across the near-field zone and identifies interruptions in the stripe, providing fast, binary sensing of obstacle presence. The second approach utilizes light-displacement measurement to estimate object height by analyzing the geometric displacement of the projected stripe in the camera image, yielding quantitative obstacle-height information with minimal computational overhead. The third approach employs a computer vision-based object detection model on embedded AI hardware to classify objects, enabling semantic perception and context-aware safety decisions. All methods are implemented on a Raspberry Pi 5 system, achieving real-time performance at 25 or 50 frames per second. Experimental evaluation and comparative analysis demonstrate that the proposed hierarchy balances precision, computation, and cost, thereby providing a scalable perception solution for the safe operation of AMRs in manufacturing environments.
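The first two tiers can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: `stripe_interrupted` flags a gap in a thresholded laser stripe within one image row (tier 1), and `height_from_displacement` applies standard laser triangulation, in which a stripe projected at angle α above the floor shifts horizontally by h/tan(α) when it lands on an object of height h (tier 2). The intensity threshold, gap tolerance, and pixel-to-meter scale are illustrative assumptions.

```python
import math
import numpy as np

def stripe_interrupted(row, intensity_thresh=200, max_gap_px=5):
    """Tier 1 (light-discontinuity): True if the laser stripe in a single
    image row is missing or broken by a gap wider than max_gap_px pixels.
    Threshold values are illustrative, not from the paper."""
    bright = np.flatnonzero(row >= intensity_thresh)
    if bright.size < 2:
        return True  # stripe absent entirely -> treat as obstructed
    return bool(np.diff(bright).max() > max_gap_px)

def height_from_displacement(delta_px, m_per_px, laser_angle_deg):
    """Tier 2 (light-displacement): laser-triangulation height estimate.
    A stripe projected at laser_angle_deg above the floor shifts by
    delta_px pixels on the floor plane when intercepted by an object:
        h = delta_px * m_per_px * tan(laser_angle_deg)."""
    return delta_px * m_per_px * math.tan(math.radians(laser_angle_deg))
```

In this sketch, tier 1 costs only a threshold and a difference per row, while tier 2 adds a single trigonometric evaluation per stripe point, which is consistent with the abstract's claim of minimal computational overhead relative to the tier-3 detection model.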