This paper introduces \sysname, a system that accelerates vision-guided physical property reasoning to enable augmented visual cognition. \sysname minimizes the run-time latency of this reasoning pipeline through a combination of algorithmic and system-level optimizations, including rapid geometric 3D reconstruction, efficient semantic feature fusion, and parallel view encoding. Through these simple yet effective optimizations, \sysname reduces the end-to-end latency of the reasoning pipeline from 10--20 minutes to less than 6 seconds. A head-to-head comparison on the ABO dataset shows that \sysname achieves this 62.9$\times$--287.2$\times$ speedup while matching (and sometimes slightly exceeding) the object-level physical property estimation accuracy (e.g., mass) of two SOTA baselines, and outperforming them in material segmentation and voxel-level inference. We further combine gaze tracking with \sysname to localize the object of interest in cluttered, real-world environments, streamlining physical property reasoning on smart glasses. A case study with Meta Aria Glasses conducted at an IKEA furniture store demonstrates that \sysname maintains consistently high performance relative to controlled captures, providing robust property estimates even with fewer views in real-world scenarios.