Reality-Aware Networks: ViTag
RAN4model_dfv4p4 provides a convenient, synchronized data format for downstream tasks. In this document, we take one subject in scene4 from an outdoor sequence as an example to demonstrate the format.
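The core idea of a synchronized multi-modal format is that samples from streams recorded at different rates (e.g. camera frames, phone IMU readings, WiFi FTM ranges) are aligned to a common timeline. The sketch below is a minimal, hypothetical illustration of nearest-timestamp alignment; the stream names, rates, and values are invented for the example and do not reflect the actual dfv4p4 schema (see the data description linked below for that).

```python
from bisect import bisect_left

def nearest_index(timestamps, t):
    """Return the index of the entry in a sorted list closest to t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is nearer to t.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

# Hypothetical streams: camera at 10 Hz, IMU at 50 Hz, FTM at ~3 Hz.
cam_ts = [0.0, 0.1, 0.2, 0.3]
imu_ts = [round(0.02 * k, 2) for k in range(20)]
ftm_ts = [0.00, 0.33, 0.66]

# For each camera frame, attach the nearest IMU and FTM samples,
# yielding one synchronized record per frame.
synced = [
    {"cam": t,
     "imu": imu_ts[nearest_index(imu_ts, t)],
     "ftm": ftm_ts[nearest_index(ftm_ts, t)]}
    for t in cam_ts
]
```

Each record in `synced` then pairs one visual frame with its temporally closest phone-side measurements, which is the shape of data that identity-association tasks such as ViTag consume.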
A detailed data description is available at: https://github.com/bryanbocao/vitag/blob/main/DATA.md.
Official Dataset (Raw Data) link: https://sites.google.com/winlab.rutgers.edu/vi-fidataset/home.
Papers with Code link: https://paperswithcode.com/dataset/vi-fi-multi-modal-dataset.
The related papers were accepted at SECON 2022:
Bryan Bo Cao, Abrar Alali, Hansi Liu, Nicholas Meegan, Marco Gruteser, Kristin Dana, Ashwin Ashok, Shubham Jain, "ViTag: Online WiFi Fine Time Measurements Aided Vision-Motion Identity Association in Multi-person Environments," 2022 19th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON).
Bryan Bo Cao, Abrar Alali, Hansi Liu, Nicholas Meegan, Marco Gruteser, Kristin Dana, Ashwin Ashok, Shubham Jain, "Demo: Tagging Vision with Smartphone Identities by Vision2Phone Translation," 2022 19th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON). Received the Best Demonstration Award.