Reality Aware Networks: Vi-Fi Dataset
The Vi-Fi dataset is a large-scale multi-modal dataset comprising vision, wireless, and smartphone motion sensor data from multiple participants and passer-by pedestrians in both indoor and outdoor scenarios. In Vi-Fi, the vision modality consists of RGB-D video from a mounted camera, while the wireless modality consists of smartphone data from participants, including WiFi Fine Time Measurement (FTM) and IMU measurements.
The Vi-Fi dataset enables and encourages new multi-modal systems research, especially vision-wireless sensor data fusion, association, and localization.
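As a concrete illustration of the association task, one simple baseline is to match each phone's FTM-ranged distance to the camera-measured depth of a detected pedestrian by minimizing the total ranging error. The sketch below is a hypothetical, minimal example (the function name, input format, and toy values are assumptions, not part of the Vi-Fi toolkit); it brute-forces the best one-to-one matching, which is feasible for the handful of participants per sequence:

```python
from itertools import permutations

def associate(ftm_dists, cam_depths):
    """Match each phone (by FTM-estimated distance, meters) to one
    detected pedestrian (by RGB-D camera depth, meters), minimizing
    the summed absolute ranging error over all one-to-one matchings.
    Returns a list of (phone_index, detection_index) pairs."""
    m, n = len(ftm_dists), len(cam_depths)
    best, best_cost = None, float("inf")
    # Enumerate every injective assignment of phones to detections.
    for perm in permutations(range(n), m):
        cost = sum(abs(ftm_dists[i] - cam_depths[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = list(enumerate(perm)), cost
    return best

# Toy example: 2 phones ranged at 4.9 m and 2.1 m; 3 detected
# pedestrians at camera depths 2.0 m, 5.0 m, and 8.3 m.
pairs = associate([4.9, 2.1], [2.0, 5.0, 8.3])
# → [(0, 1), (1, 0)]: phone 0 matches the 5.0 m detection,
#   phone 1 matches the 2.0 m detection.
```

Published approaches on Vi-Fi use far richer cues (IMU-derived motion, trajectories over time) than this single-frame depth match, but the cost-minimizing assignment structure is the same.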
(Data collection was conducted in accordance with IRB protocols, and subjects' faces have been blurred to protect participant privacy.)
The Vi-Fi dataset has been used to tackle real-world challenges in several successful publications, including (1) Multimodal Association of vision and phone data (Liu et al. [MobiSys'21 Demo] [IPSN'22]; Cao et al. [SECON'22] [SECON'22 Demo]); (2) Visual Trajectory Reconstruction from Phone Data (Cao et al. [ISACom'23 @MobiCom'23]); and (3) Out-of-Sight Trajectory Estimation (Zhang et al. [CVPR'24]). We welcome researchers to propose novel tasks of their own. What's your next new task?