I-24 MOTION Data
LADDMS Data Tutorial
The data are used to build the Mobility Equity Metric (MEM) Dashboard at https://horatioj.shinyapps.io/MobDashboard/, whose source code is available at https://github.com/Horatioj/MobilityDashboard. To reproduce the MEM calculation results, please refer to the file named MEMcode_sample.zip. The folder structure is described below; the CSV files store the calculated results (MEM & MI):
MobDashboard
  data: transport flow and O/D matrix
  Metric_data: MEM for transport network
  www: introduction page pictures
bs_memresult1k.csv: Boston MEM results by different combinations
bs_merged_geoid_comm_data.csv: Boston geographic lay

These data are used to play back and test road algorithms using a full-sized vehicle in ROS. Code is available from git at: https://github.com/jmscslgroup/vuCs3891
Quick Demonstration
1. Create your Docker image from the Dockerfile internal to this repository: docker build --tag rosblank .
2. Create the network to stream the bagfile: docker network create localros
3.

RAN4model_dfv4p4 provides a convenient synchronized format for downstream tasks. In this document, we take one subject in scene4 from one outdoor sequence as an example to demonstrate the format. A detailed data description is available at https://github.com/bryanbocao/vitag/blob/main/DATA.md. Official Dataset (Raw Data) link: https://sites.google.com/winlab.rutgers.edu/vi-fidataset/home. paperswithcode link: https://paperswithcode.com/dataset/vi-fi-multi-modal-dataset. The related papers were accepted at SECON 2022: Bryan Bo Cao, Abrar Alali, Hansi Liu, Nicholas Meegan, Marco Gruteser, Krist

RAN4model_dfv4p4 (OneDrive data source) provides a convenient synchronized format for downstream tasks. In this document, we take one subject in scene4 from one outdoor sequence as an example to demonstrate the format. A detailed data description is available at https://github.com/bryanbocao/vitag/blob/main/DATA.md. Official Dataset (Raw Data) link: https://sites.google.com/winlab.rutgers.edu/vi-fidataset/home. paperswithcode link: https://paperswithcode.com/dataset/vi-fi-multi-modal-dataset. The related papers were accepted at SECON 2022: Bryan Bo Cao, Abrar Alali, Ha

RAN4model_dfv4p4 (Google Drive data source) provides a convenient synchronized format for downstream tasks. In this document, we take one subject in scene4 from one outdoor sequence as an example to demonstrate the format. A detailed data description is available at https://github.com/bryanbocao/vitag/blob/main/DATA.md. Official Dataset (Raw Data) link: https://sites.google.com/winlab.rutgers.edu/vi-fidataset/home. paperswithcode link: https://paperswithcode.com/dataset/vi-fi-multi-modal-dataset. The related papers were accepted at SECON 2022: Bryan Bo Cao, Abrar

The Vi-Fi dataset is a large-scale multi-modal dataset that consists of vision, wireless, and smartphone motion sensor data of multiple participants and passer-by pedestrians in both indoor and outdoor scenarios. In Vi-Fi, the vision modality includes RGB-D video from a mounted camera. The wireless modality comprises smartphone data from participants, including WiFi FTM and IMU measurements. The Vi-Fi dataset facilitates and spurs innovation in multi-modal system research, especially vision-wireless sensor data fusion, association, and localization. (Data collection was in accordance with IR

Line Planning
This repository contains code to replicate the experiments in: Real-Time Approximate Routing for Smart Transit Systems. URL: https://arxiv.org/abs/2103.06212. Usage instructions, to replicate the experiments for the Manhattan network (Table 2 in the paper):
1. Download all files and unzip manhattan_dist_1.txt, manhattan_dist_2.txt, and manhattan_dist_3.txt.
2. Run line_planning.py.
The experiments run by default using trip requests from April 3rd, 2018. To run the experiments for the FHV data from Feb 3 or March 6, uncomment the 'month' parameter at line #407 in the file line_instance.py. The code allows to test
