Emotion is widely recognized as a distinguishing trait of human beings, and it plays a crucial role in our daily lives. Existing vision-based or sensor-based solutions are either obtrusive to use or rely on specialized hardware, which hinders their applicability. This paper introduces EmoSense, a first-of-its-kind wireless emotion sensing system driven by computational intelligence. The basic methodology is to mine the wireless channel response for the physical expressions that accompany emotions. The design and implementation of EmoSense face two major challenges: extracting physical expression from wireless channel data, and recovering emotion from the corresponding physical expression. For the former, we present a Fresnel-zone-based theoretical model depicting the fingerprint that physical expression leaves on the channel response. For the latter, we design an efficient computational-intelligence-driven mechanism to recognize emotion from the corresponding fingerprints. We prototyped EmoSense on commodity WiFi infrastructure and compared it with mainstream sensor-based and vision-based approaches in real-world scenarios. A numerical study over $3360$ cases confirms that EmoSense achieves performance comparable to its vision-based and sensor-based rivals under different scenarios. Since EmoSense leverages only low-cost, prevalent WiFi infrastructure, it constitutes an attractive solution for emotion sensing.
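To make the recognition step concrete, the abstract's idea of matching emotions to channel-response "fingerprints" can be sketched as a nearest-neighbor lookup. This is a minimal conceptual sketch, not the authors' implementation: the feature vectors, labels, and the `nearest_neighbor` helper are all hypothetical placeholders for whatever features EmoSense actually extracts from WiFi channel data.

```python
# Conceptual sketch: emotion recognition framed as nearest-neighbor matching
# of physical-expression "fingerprints" derived from wireless channel response.
# All feature vectors below are synthetic placeholders, not real CSI data.
import math

def euclidean(a, b):
    # Straight-line distance between two fingerprint vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(query, fingerprints):
    """Return the emotion label whose stored fingerprint is closest to the query."""
    return min(fingerprints, key=lambda item: euclidean(query, item[1]))[0]

# Hypothetical per-emotion fingerprints (e.g., channel-amplitude statistics).
fingerprints = [
    ("happy",   [0.9, 0.2, 0.4]),
    ("sad",     [0.1, 0.8, 0.3]),
    ("neutral", [0.5, 0.5, 0.5]),
]

print(nearest_neighbor([0.85, 0.25, 0.45], fingerprints))  # → happy
```

In practice a learned classifier would replace the plain distance rule, but the structure — extract a fingerprint, then match it against labeled exemplars — mirrors the two-stage pipeline the abstract describes.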